I am a Computer Science Lecturer based at Salford University, UK.
As a computer science researcher I am interested in solving technical challenges by exploiting computer resources. I have been active in the fields of High Performance Computing (HPC) and Visualization, but also have interests in computer vision, graphics, haptic interfaces and virtual environments.
Throughout my research career I have been lucky to have the opportunity to collaborate on inter-disciplinary projects, on topics ranging from medical diagnostics and training through to agricultural stakeholder engagement and broadcast technologies. I am passionate about being involved with other disciplines where there is an opportunity to learn about new subjects, and within all of my research I have demonstrated how the fundamentals of computer science can add value.
I joined Salford University as a lecturer in July 2015. My research currently focuses on developing computer science solutions to promote inclusivity and diversity throughout the broadcast industry. This aims to ensure that broadcast experiences are inclusive across different languages, addressing the needs of those with hearing or vision impairments, learning difficulties and older audiences.
More recently, the impact and success of my research has been acknowledged by a successful bid for an H2020 EU grant of €205,452.50 (out of the total grant of €2,682,227.50) to focus on moving my previous work into immersive content. The goal of the project is to explore how accessibility services can be integrated with immersive media, such as 360-degree video and virtual reality. The project will explore new methods by which accessibility services (subtitles, audio description, audio subtitling and sign language) can be deployed in immersive environments. It is anticipated that these services will provide a high-impact experience to a wide audience.
At the BBC I successfully demonstrated several methods for retrieving, reconstructing and delivering accessible services such as subtitles, from which I filed a patent on behalf of the BBC. I also developed the concept of responsive subtitles, re-blocking each caption to fit its container rather than trying to scale traditional captions to fit each display. I have also demonstrated how haptic video can be used to provide an enhanced experience to our audience: recording accelerometer data from a camera and playing it back through commercial haptic devices in sync with the video gives the viewer a sensation of what is happening in the video, such as a mountain biking stunt. This was found not only to enhance the viewing experience but also to make the experience accessible to viewers with limited vision or hearing.
I also worked on large-scale data visualization, developing a tool that presents the interaction between BBC content and social media as an interactive graph, allowing the data to be mined in an intuitive way. Other work involved device micro-location: identifying where people and devices are within the home using the signal strength recorded from their Bluetooth devices. This led to a number of projects exploring how a digital home can interact with you if it knows where you are, and update content appropriately.
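As an illustration of the micro-location idea, the sketch below estimates which room a device is in from Bluetooth signal strength using a simple log-distance path-loss model. The beacon names, calibration constants and nearest-beacon rule are illustrative assumptions rather than the system that was built.

```python
import math

# Hypothetical calibration values: measured power at 1 m and a path-loss exponent.
# A real deployment would fit these per room from survey data.
TX_POWER_DBM = -59
PATH_LOSS_EXPONENT = 2.2

def rssi_to_distance(rssi_dbm: float) -> float:
    """Estimate distance (metres) from a Bluetooth RSSI reading
    using a log-distance path-loss model."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def nearest_room(readings: dict, room_of_beacon: dict) -> str:
    """Pick the room whose beacon appears closest to the device."""
    closest_beacon = min(readings, key=lambda b: rssi_to_distance(readings[b]))
    return room_of_beacon[closest_beacon]

# Example: three fixed beacons, one RSSI sample (dBm) per beacon.
readings = {"beacon-kitchen": -71.0, "beacon-lounge": -58.0, "beacon-hall": -80.0}
rooms = {"beacon-kitchen": "kitchen", "beacon-lounge": "lounge", "beacon-hall": "hall"}
print(nearest_room(readings, rooms))  # -> "lounge"
```

In practice readings would be smoothed over time before choosing a room, since raw RSSI is noisy.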
Funded by the National Institute for Social Care and Health Research (NISCHR), Welsh Assembly Government, as a joint project between the Universities of Bangor, Aberystwyth, Cardiff and Swansea.
I was responsible for identifying and investigating projects where computer imaging and visualization technologies could add value to medical applications, working with National Health Service (NHS) clinicians to provide tools that enhance their understanding of patient data. Our main objective was to improve the health and wellbeing of the people of Wales by developing new visualization techniques to improve medical diagnosis and training. The unit was established in 2011 and is one of three Biomedical Research Units funded by the Welsh Government's National Institute for Social Care and Health Research (NISCHR). It is a partnership between the Research Institute of Visual Computing (RIVIC), the NHS in Wales and the collaborating universities.
Based on my previous experience of building medical simulators, I identified the requirement to provide accurate real-time blood flow simulation within the user interface. I developed a model based on a computer animation approach using boids - autonomous particles which follow basic rules. Having benchmarked my model against the industry-accepted Smoothed Particle Hydrodynamics algorithm, I achieved very similar results at a fraction of the computational cost. I later used this algorithm within a decision support tool for cardiologists: simulating flow through large vascular networks in real time allows the clinician to evaluate different medical outcomes.
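The sketch below illustrates the flavour of a boids-style flow model: each particle is driven along the vessel's flow direction while simple cohesion and separation rules keep the flock coherent. The rule weights and update scheme are illustrative assumptions, not the Directed Particle System implementation itself.

```python
import numpy as np

def step_particles(pos, vel, flow_dir, dt=0.01,
                   cohesion=0.5, separation=1.5, drive=2.0, radius=0.2):
    """One update of a boids-style blood-flow sketch.

    pos, vel : (N, 3) arrays of particle positions and velocities.
    flow_dir : unit vector (np.ndarray) giving the local downstream direction.
    Each particle is steered towards the local centre of its neighbours
    (cohesion), away from very close neighbours (separation) and along
    the vessel's flow direction (drive).
    """
    n = len(pos)
    acc = np.tile(drive * flow_dir, (n, 1))          # constant downstream drive
    for i in range(n):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist > 0) & (dist < radius)
        if near.any():
            acc[i] += cohesion * offsets[near].mean(axis=0)                      # pull to local centre
            acc[i] -= separation * (offsets[near] / dist[near, None] ** 2).sum(axis=0)  # push apart
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel
```

Because each rule is a cheap local computation, a model of this kind can be stepped in real time for many particles, which is the property that made it attractive compared with full fluid dynamics.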
I was also responsible for developing a tool for estimating femoroacetabular impingement semi-automatically from CT data, providing a detailed description of the femoral head with less clinician input than traditional methods. I also developed a custom haptic interface for training percutaneous nephrolithotomy (PCNL) access, designed as a low-cost interface to an iPad application.
I collaborated with the School of Environment, Natural Resources and Geography at Bangor University to develop a user interface for stakeholder engagement between researchers and the farmers who work the land, demonstrating the effects of agricultural land use on flooding. I developed a user interface where the aerial view of a farm was projected onto a table surface. Physical markers representing agricultural influences (such as livestock or trees) were placed onto the table and their positions tracked using a Microsoft Kinect games controller. The simulation updated the display in real time to demonstrate the consequences for water flow and subsequent flooding, giving users a high-impact understanding. A later version of the table-top display used a 3D projector and active stereo glasses to allow the user to see a 3D landscape and to interact with virtual objects in 3D.
Funded by the Engineering and Physical Sciences Research Council (EPSRC) as a joint project between the Universities of Bangor, Liverpool, Hull and Leeds and Imperial College London. Working with interventional radiologists, I was involved in the development of a training simulator for teaching the Seldinger technique, a medical method which provides access to a vessel by means of a needle puncture into an artery. Once access has been gained to a vessel, a guidewire and catheter are inserted and manipulated coaxially to reach the damaged pathology. As the technical lead for the project, I was responsible for managing the other researchers, a distributed team from the Universities of Bangor, Liverpool, Hull and Leeds, Imperial College London and Manchester Business School.
By using real force data measured with strain gauges mounted in the hub of a needle during a real procedure, my prototype simulator was able to provide a high level of fidelity. I developed and patented custom hardware allowing the operator to insert and re-orientate a real needle within a virtual patient. I also developed the software interface to provide a virtual fluoroscope, giving the trainee a virtual real-time x-ray and replicating the operating room environment.
A full validation study was completed and results suggested that 82% of radiologists believed that the simulator is effective for learning basic skills and 86% believed it would impact training of tool use.
Funded by the Engineering and Physical Sciences Research Council (EPSRC) as a joint project between the Universities at Bangor, Swansea, Leeds, Manchester. As part of my PhD research I was funded to develop a tool to enable High Performance Visualization (HPV) tasks to be computed by remote resources utilizing the computational grid. HPV tasks are generally characterized by high-quality graphics, large datasets, computationally-intensive tasks, large scale data distribution and often extensive data communication. A typical HPV task is a complex feedback process, involving data collection, visualization design, task parallelization, immersive visual display and interfacing with the corresponding data generator such as a simulation engine. In order to address this problem I was involved in the development of the e-Viz toolkit.
With e-Viz, the running of a visualization task primarily involves three system entities, namely a client computer, a Grid-based server infrastructure, and a broker computer. The client is supported by two software modules: a launcher application and a generic UI used to control pipeline parameters and to display and interact with visualization results. The launcher is the entry point to the e-Viz system, and provides a wizard-based UI that allows users to specify their jobs in terms of input data sets and desired visualization output. Through Grid middleware, it makes calls to the web services on the broker computer. This research was rated as internationally leading by the funding body.
Supervisor: Professor Nigel W. John - Leading researcher in computer graphics and medical simulation.
My PhD research focused upon developing an exemplar application for e-Viz. I developed a novel interface for Transcranial Magnetic Stimulation (TMS) - a medical procedure in which electrical activity in the brain is influenced by a pulsed magnetic field. At the time the commercial software available provided a 2D view of the cranium taken from an MRI scan. I developed a 3D Augmented Reality (AR) interface which provided the operator with a Head Mounted Display, through which it was possible to see the patient with a 3D render of their brain aligned.
Initially a commercial optical tracking system provided the transformation between the operator's viewpoint and the subject's head using clear markers; however, this was impractical for real-world use. I therefore developed a new approach to AR, enabling real-world objects to be tracked by extracting feature points from the user's viewpoint rather than using markers. At the time of this work, 3D rendering could not be done on a desktop computer in real time, so the e-Viz framework was used to autonomously deploy HPV resources to ensure that the virtual objects were rendered in real time. e-Viz also enabled the allocation of remote High Performance Computing (HPC) resources to handle the computational requirements of the object tracking and pose estimation.
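A minimal sketch of the marker-less idea is shown below using OpenCV's ORB features: points from a stored reference view are matched into the live frame and a RANSAC homography estimates how the tracked plane has moved. The thresholds and the use of a single reference view are simplifying assumptions; the original system worked with multiple stored views and offloaded this computation to remote resources.

```python
import cv2
import numpy as np

# ORB feature detector/descriptor and a brute-force Hamming matcher.
orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_homography(reference_gray, frame_gray):
    """Match feature points between a stored reference view and the current
    frame, then estimate a homography describing how the tracked plane moved."""
    kp_ref, des_ref = orb.detectAndCompute(reference_gray, None)
    kp_cur, des_cur = orb.detectAndCompute(frame_gray, None)
    if des_ref is None or des_cur is None:
        return None
    matches = matcher.match(des_ref, des_cur)
    if len(matches) < 10:                      # too few correspondences to be reliable
        return None
    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_cur[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # maps reference-view pixels into the current frame
```

The resulting homography (or a full pose recovered from it with known camera intrinsics) is what keeps the rendered brain aligned with the subject's head as the viewpoint moves.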
The Immersive Accessibility Project (ImAc) explores how accessibility services can be integrated with 360° video, as well as new methods for enabling universal access to immersive content. ImAc is focused on inclusivity and addresses the needs of all users, including those with sensory or learning disabilities, of all ages, and considers language and user preferences. The project focuses on moving away from the constraints of existing technologies and explores new methods for creating a personal experience for each consumer. It is not good enough to simply retrofit subtitles into immersive content: this paper attempts to disrupt the industry with new and often controversial methods.
This paper provides an overview of the ImAc project and proposes guiding methods for subtitling in immersive environments. We discuss the current state of the art for subtitling in immersive environments and the rendering of subtitles in the user interface within the ImAc project. We then discuss new experimental rendering modes that have been implemented, including a responsive subtitle approach, which dynamically re-blocks subtitles to fit the available space, and explore alternative rendering techniques where the subtitles are attached to the scene.
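To make the responsive re-blocking idea concrete, the sketch below greedily re-flows caption words into lines and blocks sized for the available container. The character-count limits and the greedy strategy are illustrative assumptions, not the ImAc renderer itself.

```python
def reblock(words, max_chars_per_line, max_lines_per_block):
    """Greedily re-flow caption words into blocks that fit the available
    space, instead of scaling a fixed TV layout down to the display."""
    blocks, lines, current = [], [], ""
    for word in words:
        candidate = (current + " " + word).strip()
        if not current or len(candidate) <= max_chars_per_line:
            current = candidate
            continue
        lines.append(current)          # line is full, start a new one
        current = word
        if len(lines) == max_lines_per_block:
            blocks.append(lines)       # block is full, start a new one
            lines = []
    if current:
        lines.append(current)
    if lines:
        blocks.append(lines)
    return blocks

# A narrow phone-sized container yields more, shorter blocks than a TV layout would.
text = "Millions of people rely on subtitles when watching video content".split()
print(reblock(text, max_chars_per_line=20, max_lines_per_block=2))
```

A production renderer would also respect authored line breaks, speaker changes and timing, but the core idea is the same: the block structure is derived from the container rather than fixed at authoring time.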
A growing number of applications that generate massive streams of data need intelligent data processing and online analysis. Data & Knowledge Engineering (DKE) stimulates the exchange of ideas and interaction between these two related fields of interest, and makes it possible to understand, apply and assess the knowledge and skills required for the development and application of data mining systems. With present technology, companies are able to collect vast amounts of data with relative ease; indeed, many companies now have more data than they can handle. A large portion of this data consists of unstructured data sets, which can amount to 90 percent of an organization's data. With data quantities growing steadily, the explosion of data is putting a strain on infrastructures, with diverse companies having to increase their data-centre capacity with more servers and storage. This study conceptualizes handling enormous data as a stream mining problem over continuous data streams and proposes an ensemble of unsupervised learning methods for efficiently detecting anomalies in stream data.
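As a toy illustration of an unsupervised ensemble over a stream (not the method proposed in the study), the sketch below combines a z-score test and a median-absolute-deviation test over a sliding window and flags a point only when both detectors agree.

```python
from collections import deque
import statistics

class StreamingEnsemble:
    """Toy ensemble of two unsupervised detectors over a sliding window.
    Illustrative only: thresholds and window size are arbitrary choices."""

    def __init__(self, window=200, z_thresh=3.0, mad_thresh=3.5):
        self.window = deque(maxlen=window)
        self.z_thresh = z_thresh
        self.mad_thresh = mad_thresh

    def update(self, x: float) -> bool:
        """Consume one value from the stream; return True if it looks anomalous."""
        flagged = False
        if len(self.window) >= 30:                     # need some history first
            mean = statistics.fmean(self.window)
            stdev = statistics.pstdev(self.window) or 1e-9
            median = statistics.median(self.window)
            mad = statistics.median(abs(v - median) for v in self.window) or 1e-9
            z_vote = abs(x - mean) / stdev > self.z_thresh
            mad_vote = 0.6745 * abs(x - median) / mad > self.mad_thresh
            flagged = z_vote and mad_vote              # simple agreement vote
        self.window.append(x)
        return flagged
```

The point of the ensemble is robustness: a single detector tuned for one kind of drift tends to misfire on another, whereas requiring agreement reduces false alarms on an unbounded stream.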
In order to succeed in global competition, organizations need to understand and monitor the rate of data influx, and the acquisition of continuous data has become a widespread concern in many fields. Mining frequent patterns in data streams has recently become a challenging task in data mining and knowledge discovery. Most of the datasets generated are in the form of a stream (stream data), which poses the challenge of being continuous; the process of extracting knowledge structures from such continuous, rapid data records is termed stream mining. This study conceptualizes the process of detecting outliers and responding to stream data by proposing a Compressed Stream Pattern algorithm, which dynamically generates a frequency-descending prefix tree structure with only a single pass over the data. We show that applying tree restructuring techniques can considerably reduce the mining time on various datasets.
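The sketch below illustrates a frequency-descending prefix tree in its simplest form: items in each transaction are reordered by global frequency before insertion so that frequent items share prefixes near the root. It buffers a batch and counts items first, so it only illustrates the data structure, not the single-pass, dynamically restructured tree of the proposed algorithm.

```python
from collections import Counter

class Node:
    def __init__(self, item=None):
        self.item = item
        self.count = 0
        self.children = {}

def build_prefix_tree(transactions):
    """Insert transactions into a prefix tree with items ordered by
    descending global frequency, so frequent items share prefixes."""
    freq = Counter(item for t in transactions for item in t)
    root = Node()
    for t in transactions:
        ordered = sorted(t, key=lambda item: (-freq[item], item))
        node = root
        for item in ordered:
            node = node.children.setdefault(item, Node(item))
            node.count += 1
    return root

tree = build_prefix_tree([["a", "b", "c"], ["b", "a"], ["a", "c", "d"]])
print(tree.children["a"].count)   # "a" is the most frequent item, so it sits at the root level
```

Frequency-descending ordering is what makes the tree compact; restructuring it as the stream's item frequencies drift is the part the published algorithm addresses.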
Audiences are increasingly using services, such as video on demand and the Web, to watch television programs. Broadcasters need to make subtitles available across all these new platforms. These platforms also create new design opportunities for subtitles along with the ability to customize them to an individual’s needs. To explore these new opportunities for subtitles, we have begun the process of reviewing the guidance for subtitles on television and evaluating the original user research. We have found that existing guidelines have been shaped by a mixture of technical constraints, industry practice, and user research, constrained by existing technical standards. This paper provides an overview of the subtitle research at BBC R&D over the past two years. Our research is revealing significant diversity in the needs and preferences of frequent subtitle users, and points to the need for personalization in the way subtitles are displayed. We are developing a new approach to the authoring and display of subtitles that can respond to the user requirements by adjusting the subtitle layout on the client device.
A method of retrieving supplementary data related to audio-video content with reference to a reference version of that content involves deriving an audio signature from audio-video content. The audio signature is searched against reference audio signatures of reference versions of the audio-video content. If a match is found, supplementary data related to the reference audio-video content is retrieved. The search is a directed search using data extracted from the transmission route by which the audio-video content is to be delivered to a consumer. The search process is thereby more efficient as the search may be started at an appropriate location or conducted in an appropriate direction, rather than searching blindly.
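The directed-search idea can be illustrated with a highly simplified sketch: rather than scanning a reference signature from the start, the search expands outwards from a seed offset derived from (hypothetical) transmission metadata such as the broadcast time. Exact sequence matching stands in for real audio fingerprint comparison here.

```python
def directed_search(clip_sig, reference_sig, seed_index, max_radius=5000):
    """Search outwards from a seed offset instead of scanning blindly.

    clip_sig, reference_sig : sequences of fingerprint values (illustrative)
    seed_index              : expected offset of the clip in the reference
    """
    n = len(clip_sig)
    for radius in range(max_radius):
        for offset in {seed_index - radius, seed_index + radius}:
            if 0 <= offset <= len(reference_sig) - n:
                if reference_sig[offset:offset + n] == clip_sig:
                    return offset        # match found near the expected position
    return None                          # no match within the search radius
```

Starting near the expected position means a match is usually found after examining a tiny fraction of the reference material, which is the efficiency gain the patent abstract describes.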
Millions of people rely on subtitles when watching video content. The current change in media viewing behaviour involving computers has resulted in a large proportion of people turning to online sources as opposed to regular television for news information. This work analyses the user experience of viewing subtitled news videos presented as part of a web page. A lab-based user experiment was carried out with frequent subtitle users, focusing on determining whether changes in video dimension and subtitle location could affect the user experience attached to viewing subtitled content.
A significant improvement in user experience was seen when changing the subtitle location from the standard position of within a video at the bottom to below the video clip. Additionally, participants responded positively when given the ability to change the position of subtitles in real time, allowing for a more personalised viewing experience. This recommendation for an alternative subtitle positioning that can be controlled by the user is unlike current subtitling practice. It provides evidence that further user-based research examining subtitle usage outside of the traditional television interface is required.
The Internet has continued to evolve, becoming increasingly media rich. It is now a major platform for video content, which is available to a variety of users across a range of devices. Subtitles enhance this experience for many users. However, subtitling techniques are still based on early television systems, which impose limitations on font type, size and line length. These are no longer appropriate in the context of a modern web-based culture.
In this paper we describe a new approach to displaying subtitles alongside the video content. This follows the responsive web design paradigm enabling subtitles to be formatted appropriately for different devices whilst respecting the requirements and preferences of the viewer. We present a prototype responsive video player, and report initial results from a study to evaluate the value perceived by regular subtitle users.
As broadcasters’ web sites become more media rich it would be prohibitively expensive to manually caption all of the videos provided. However, many of these videos have been clipped from broadcast television and would have been captioned at the point of broadcast.

The recent FCC ruling requires all broadcasters to provide closed captions for all ‘straight lift’ video clips that have been broadcast on television from January 2016. From January 2017 captions will be required for ‘montages’, which consist of multiple clips, and the requirement to caption clips from live or near-live television will apply from July 2017.

This paper presents a method of automatically finding a match for a video clip from within a set of off-air television recordings. It then shows how the required set of captions can be autonomously identified, retimed and reformatted for use with IP delivery. It also shows how captions can be retrieved for each sub-clip within a montage and combined to create a set of captions. Finally it describes how, with a modest amount of human intervention, live captions can be corrected for errors and timing to provide improved captions for video clips presented on the web.
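The retiming step can be illustrated with a short sketch: once the clip has been located in the off-air recording, the overlapping captions are selected and their timestamps shifted so they are relative to the start of the web clip. The tuple format and parameter names are assumptions for illustration only.

```python
def retime_captions(captions, clip_start, clip_end, clip_offset=0.0):
    """Select the off-air captions that overlap the matched clip and shift
    their timestamps into the web clip's own timeline.

    captions    : list of (start, end, text) tuples in broadcast time (seconds)
    clip_start  : broadcast time at which the matched section begins
    clip_end    : broadcast time at which the matched section ends
    clip_offset : where, within the web clip, the matched section starts
    """
    retimed = []
    for start, end, text in captions:
        if end <= clip_start or start >= clip_end:
            continue                                   # caption lies outside the clip
        new_start = max(start, clip_start) - clip_start + clip_offset
        new_end = min(end, clip_end) - clip_start + clip_offset
        retimed.append((new_start, new_end, text))
    return retimed
```

For a montage, the same routine would be run per sub-clip with a different `clip_offset` for each, and the results concatenated into one caption set.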
Analysts wish to explore different hypotheses, organize their thoughts into visual narratives and present their findings. Some developers have used algorithms to ascertain key events from their data, while others have visualized different states of their exploration and utilized free-form canvases to enable users to develop their thoughts. What is required is a visual layout strategy that summarizes specific events and allows users to lay out the story in a structured way. We propose the use of the concept of ‘storyboarding’ for visual analytics. In film production, storyboarding techniques enable film directors and those working on the film to previsualize the shots and evaluate potential problems. We present six principles of storyboarding for visual analytics: composition, viewpoints, transition, annotability, interactivity and separability. We use these principles to develop epSpread, which we apply to the VAST Challenge 2011 microblogging data set and to Twitter data from the 2012 Olympic Games. We present the technical challenges and design decisions involved in developing the epSpread storyboarding visual analytics tool, demonstrate the effectiveness of our design, and discuss lessons learnt from the storyboarding method.
In vascular interventional radiology, procedures generally start with the Seldinger technique to access the vasculature, using a needle through which a guidewire is inserted, followed by navigation of catheters within the vessels. Visual and tactile skills are learnt in a patient apprenticeship which is expensive and risky for patients. We propose a training alternative through a new virtual simulator supporting the Seldinger technique: ImaGiNe (imaging guided interventional needle) Seldinger. It is composed of two workstations: (1) a simulated pulse is palpated, in an immersive environment, to guide needle puncture and (2) two haptic devices provide a novel interface where a needle can direct a guidewire and catheter within the vessel lumen, using virtual fluoroscopy. Different complexities are provided by 28 real patient datasets. The feel of the simulation is enhanced by replicating, with the haptics, real force and flexibility measurements. A preliminary validation study has demonstrated training effectiveness for skills transfer.
This paper introduces a novel technique for the visualization of blood (or other fluid) flowing through a complex 3D network of vessels. The Directed Particle System (DPS) approach is loosely based on the computer graphics concept of flocking agents. It has been developed and optimised to provide effective real time visualization and qualitative simulation of fluid flow. There are many potential applications of DPS, and one example - a decision support tool for coronary collateralization - is discussed.
We present epSpread, an analysis and storyboarding tool for geolocated microblogging data. Individual time points and ranges are analysed through queries, heatmaps, word clouds and streamgraphs. The underlying narrative is shown on a storyboard-style timeline for discussion, refinement and presentation. The tool was used to analyse data from the VAST Challenge 2011 Mini-Challenge 1, tracking the spread of an epidemic using microblogging data. In this article we describe how the tool was used to identify the origin and track the spread of the epidemic.
In this poster we present a portable and easy-to-calibrate 3D tabletop display enabling easy understanding of complex datasets and simulations by providing a visualization environment with natural interaction. This table enables users to interact with complex datasets in a natural way and engenders group and collaborative interaction. We also demonstrate the use of the display with an example environmental visualization tool, designed for stakeholder engagement.
Osteoarthritis of the hip is commonly caused by repetitive contact between abnormal skeletal prominences at the anterosuperior femoral head-neck junction and the rim of the acetabular socket. Current methods for estimating femoroacetabular impingement by analyzing the sphericity of the femoral head require manual measurements which are both inaccurate and open to interpretation. In this research we provide a prototype software tool for improving this estimation.
Since the release of the motion picture 'Minority Report' in 2002, which depicts Tom Cruise interacting with a video display using only hand gestures, there has been significant interest in the development of intelligent display technology that users are able to interact with using gestures. In the real world it is commonplace for us to use gestures and body language to reinforce our communication. It therefore becomes very natural for us to want to interact with our virtual media in the same way. Traditional methods for pose recognition involve using cameras to track the position of the user. However, this can be very challenging to do accurately in a variety of environments where the camera can become occluded or the lighting conditions can change. In this research we prototyped a 3D tabletop display and explored the Kinect game controller as a possible solution to tracking the pose and gesture of a user whilst interacting with our display.
Within the limits of current technology, many applications of a virtual environment will trade off accuracy for speed. This is not an acceptable compromise in a medical training application where both are essential. Efficient algorithms must therefore be developed. The purpose of this project is the development and validation of a novel physics-based real time tool manipulation model, which is easy to integrate into any medical virtual environment that requires support for the insertion of long flexible tools into complex geometries. This encompasses medical specialities such as vascular interventional radiology, endoscopy, and laparoscopy, where training, prototyping of new instruments/tools and mission rehearsal can all be facilitated by using an immersive medical virtual environment. Our model accurately uses patient-specific data and adapts to the geometrical complexity of the vessel in real time.
Interventional Radiology (IR) provides a minimally invasive method for accessing vessels and organs as an alternative to traditional open surgery. By manipulating coaxially a catheter and guidewire through the vascular system, a range of pathologies can be treated from within the vessel themselves.
The Seldinger technique [1] focuses on the initial step of gaining access to a vessel, by means of a needle puncture into an artery. After identifying that the needle is within the vessel, by a flow of blood from the hub of the needle, a guidewire is then passed through the needle into the vessel. Both tactile feedback and fluoroscopy (real time x-ray imaging) are used to guide the wire into a suitable position within the vessel. Finally the needle is removed, whilst applying pressure to the vessel to stem the bleeding, and the guidewire is left in place to act as a conduit for the catheter.
In collaboration with other groups in the UK (the CraIVE consortium) we have developed a simulator for training the steps of the Seldinger technique [2]. It uses segmented 3D vascular data from real patients [3] and the measured properties of the instruments [4] in order to provide a physically correct virtual environment.
In order to provide a tactile real world interface into the virtual environment, two hardware devices were used. Firstly a proprietary VSP interface (Vascular Simulation Platform, from Mentice, Sweden) was used to track the position and rotation of the guidewire and catheter coaxially as well as the depth and rotation of the needle, as shown in figure 3. Secondly a 'HapticNeedle' interface (UK Patent Application Number: 1001399.3, European Patent Application Number: PCT/EP2010/066489) was developed at Bangor University, in order to allow the trainee to insert and manipulate the orientation of the physical needle. The two devices were coupled together with a guide tube, transferring the instruments from the 'HapticNeedle' into the VSP. The construction of this interface is described in this paper.
Haptics technologies are frequently used in virtual environments to allow participants to touch virtual objects. Medical applications are no exception, and a wide variety of commercial and bespoke haptics hardware solutions have been employed to aid in the simulation of medical procedures. Intuitively, the use of haptics will improve the training of the task; however, little evidence has been published to prove that this is indeed the case. In this paper we summarise the available evidence and use a case study from interventional radiology to discuss the question: how important is it to touch medical virtual environments?
Medical visualization in a hospital can be used to aid training, diagnosis, and pre- and intra-operative planning. In such an application, a virtual representation of a patient is needed that is interactive, can be viewed in three dimensions (3D), and simulates physiological processes that change over time. This paper highlights some of the computational challenges of implementing a real time simulation of a virtual patient, when accuracy can be traded-off against speed. Illustrations are provided using projects from our research based on Grid-based visualization, through to use of the Graphics Processing Unit (GPU).
The particle systems approach is a well known technique in computer graphics for modelling fuzzy objects such as fire and clouds. The algorithm has also been applied to different biomedical applications and this paper presents two such methods: a charged particle method for soft tissue deformation with integrated haptics; and a blood flow visualization technique based on boids. The goal is real time performance with high fidelity results.
Purpose: Commercial interventional radiology vascular simulators emulate instrument navigation and device deployment, though none supports the Seldinger technique, which provides initial access to the vascular tree. This paper presents a novel virtual environment for teaching this core skill.
Methods: Our simulator combines two haptic devices: vessel puncture with a virtual needle and catheter and guidewire manipulation. The simulation software displays the instrument interactions with the vessels. Instruments are modelled using a mass-spring approximation, while efficient collision detection and collision response allow real time interactions.
Results: Experienced interventional radiologists evaluated the haptic components of our simulator as realistic and accurate. The vessel puncture haptic device proposes a first prototype to simulate the Seldinger technique. Our simulator presents realistic instrument behaviour when compared to real instruments in a vascular phantom.
Conclusion: This paper presents the first simulator to train the Seldinger technique. The preliminary results confirm its utility for interventional radiology training.
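As a rough illustration of the mass-spring approximation used for the instruments (greatly simplified from the simulator itself), the sketch below computes the internal forces for a guidewire modelled as a chain of point masses joined by damped linear springs. The parameter names and the 1D chain topology are assumptions for illustration.

```python
import numpy as np

def spring_forces(points, velocities, rest_length, stiffness, damping):
    """Internal forces for a flexible instrument approximated as a chain
    of point masses joined by damped linear springs.

    points, velocities : (N, 3) arrays for the chain nodes
    """
    forces = np.zeros_like(points)
    for i in range(len(points) - 1):
        d = points[i + 1] - points[i]
        length = np.linalg.norm(d)
        if length < 1e-9:
            continue
        direction = d / length
        # Hooke's law along the segment, plus damping of the relative velocity.
        f = stiffness * (length - rest_length) * direction
        f += damping * np.dot(velocities[i + 1] - velocities[i], direction) * direction
        forces[i] += f
        forces[i + 1] -= f
    return forces
```

Integrating these forces each frame, together with collision response against the vessel walls, is what allows the instrument behaviour to be updated at interactive rates.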
Recent years have seen a significant increase in the use of Interventional Radiology (IR) as an alternative to open surgery. A large number of IR procedures commence with needle puncture of a vessel to insert guidewires and catheters: these clinical skills are acquired by all radiologists during training on patients, associated with some discomfort and occasionally, complications. While some visual skills can be acquired using models such as the ones used in surgery, these have limitations for IR, which relies heavily on a sense of touch. Both patients and trainees would benefit from a virtual environment (VE) conveying touch sensation to realistically mimic procedures. The authors are developing a high fidelity VE providing a validated alternative to the traditional apprenticeship model used for teaching the core skills. The current version of the CRaIVE simulator combines home-made software, haptic devices and commercial equipment.
Traditionally registration and tracking within Augmented Reality (AR) applications have been built around specific markers which have been added into the user’s viewpoint and allow for their position to be tracked and their orientation to be estimated in real-time. All attempts to implement AR without specific markers have increased the computational requirements and some information about the environment is still needed in order to match the registration between the real world and the virtual artifacts. This thesis describes a novel method that not only provides a generic platform for AR but also seamlessly deploys High Performance Computing (HPC) resources to deal with the additional computational load, as part of the distributed High Performance Visualization (HPV) pipeline used to render the virtual artifacts. The developed AR framework is then applied to a real world application of a marker-less AR interface for Transcranial Magnetic Stimulation (TMS), named BART (Bangor Augmented Reality for TMS).
Three prototypes of BART are presented, along with a discussion of the subsequent limitations and solutions of each. First, by using a proprietary tracking system, it is possible to achieve accurate tracking, but with the limitations of having to use bold markers and being unable to render the virtual artifacts in real time. Second, BART v2 implements a novel tracking system using computer vision techniques: repeatable feature points are extracted from the user's viewpoint to build a description of the object or plane that the virtual artifact is aligned with; then, as each frame is updated, the changing positions of the feature points are used to estimate how the object has moved. Third, the e-Viz framework is used to autonomously deploy HPV resources to ensure that the virtual objects are rendered in real time. e-Viz also enables the allocation of remote High Performance Computing (HPC) resources to handle the computational requirements of the object tracking and pose estimation.
PURPOSE-MATERIALS: To use patient imaging as the basis for developing virtual environments (VE).
BACKGROUND: Interventional radiology basic skills are still taught in an apprenticeship in patients, though these could be learnt in high fidelity simulations using VE. Ideally, imaging data sets for simulation of image-guided procedures would alter dynamically in response to deformation forces such as respiration and needle insertion. We describe a methodology for deriving such dynamic volume rendering from patient imaging data.
METHODS: With patient consent, selected, routine imaging (computed tomography, magnetic resonance, ultrasound) of straightforward and complex anatomy and pathology was anonymised and uploaded to a repository at Bangor University. Computer scientists used interactive segmentation processes to label target anatomy for creation of a surface (triangular) and volume (tetrahedral) mesh. Computer modelling techniques used a mass spring algorithm to map tissue deformations such as needle insertion and intrinsic motion (e.g. respiration). These methods, in conjunction with a haptic device, provide output forces in real time to mimic the 'feel' of a procedure. Feedback from trainees and practitioners was obtained during preliminary demonstrations.
RESULTS: Data sets were derived from 6 patients and converted into deformable VEs. Preliminary content validation studies of a framework developed for training on liver biopsy procedures, demonstrated favourable observations that are leading to further revisions, including implementation of an immersive VE.
CONCLUSION: It is possible to develop dynamic volume renderings from static patient data sets and these are likely to form the basis of future simulations for IR training of procedural interventions.
There is a shortage of radiologists trained in performance of Interventional radiology (IR) procedures. Visceral and vascular IR techniques almost universally commence with a needle puncture, usually to a specific target for biopsy, or to introduce wires and catheters for diagnosis or treatment. These skills are learnt in an apprenticeship in simple diagnostic procedures in patients, though there are drawbacks to this training method. In addition, certification depends partly on a record of the number of procedures performed, with no current method of objective IR skills assessment.
Despite the presence of an effective mentor, the apprenticeship method of training presents some risks to patients: these could be mitigated in a pre-patient training curriculum, which would use simulation to provide skills training.
Recent hardware and software advances have demonstrated that it is now practicable to run large visual computing tasks over heterogeneous hardware with output on multiple types of display devices. As the complexity of the enabling infrastructure increases, then so too do the demands upon the programmer for task integration as well as the demands upon the users of the system. This places importance on system developers to create systems that reduce these demands. Such a goal is an important factor of autonomic computing, aspects of which we have used to influence our work. In this paper we develop a model of adaptive infrastructure for visual systems. We design and implement a simulation engine for visual tasks in order to allow a system to inspect and adapt itself to optimise usage of the underlying infrastructure. We present a formal abstract representation of the visualization pipeline, from which a user interface can be generated automatically, along with concrete pipelines for the visualization. By using this abstract representation it is possible for the system to adapt at run time. We demonstrate the need for, and the technical feasibility of, the system using several example applications.
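The sketch below gives a minimal example of the kind of abstract pipeline description meant here: a declarative record of the data source, processing modules and target display from which both a control UI and a concrete pipeline could be generated. The field names and example modules are illustrative assumptions, not the actual schema used in the system.

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str                      # e.g. "isosurface"
    parameters: dict = field(default_factory=dict)

@dataclass
class PipelineDescription:
    source: str                    # dataset identifier or URL
    modules: list = field(default_factory=list)
    display: str = "desktop"       # "desktop", "tiled-wall", "HMD", ...

# The same abstract description can be mapped onto different concrete back-ends
# (a local renderer or a parallel renderer on a cluster), and its parameter
# lists are enough to auto-generate a simple control UI.
pipeline = PipelineDescription(
    source="mri_head_scan.vtk",
    modules=[Module("isosurface", {"isovalue": 500}),
             Module("decimate", {"reduction": 0.5})],
)
```

Keeping the description abstract is what allows the system to swap concrete pipelines at run time without the user's view of the task changing.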
Although desktop graphical capabilities continually improve, visualization at interactive frame rates remains a problem for very large datasets or complex rendering algorithms. This is particularly evident in scientific visualization (e.g., medical data or simulation of fluid dynamics), where high-performance computing facilities organised in a distributed infrastructure need to be used to achieve reasonable rendering times. Such distributed visualization systems are required to be increasingly flexible; they need to be able to integrate heterogeneous hardware (both for rendering and display), span different networks, easily reuse existing software, and present user interfaces appropriate to the task (both single user and collaborative use). Current complex distributed software systems tend to be hard to administrate and debug, and tend to respond poorly to faults (hardware or software).

In recognition of the increasing complexity of general computing systems (not specifically visualization), IBM have suggested the Autonomic Computing approach to enable self-management through the means of self-configuration, self-optimisation, self-healing and self-protection.
Transcranial Magnetic Stimulation (TMS) is the process in which electrical activity in the brain is influenced by a pulsed magnetic field. Common practice is to align an electromagnetic coil with points of interest identified on the surface of the brain, from an MRI scan of the subject. The coil can be tracked using optical sensors, enabling the targeting information to be calculated and displayed on a local workstation. In this paper we explore the hypothesis that using an Augmented Reality (AR) interface for TMS will improve the efficiency of carrying out the procedure. We also aim to provide a flexible infrastructure that if required, can seamlessly deploy processing power from a remote high performance computing resource.
Traditionally, registration and tracking within Augmented Reality (AR) applications have been built around limited bold markers, which allow for their orientation to be estimated in real-time. All attempts to implement AR without specific markers have increased the computational requirements and some information about the environment is still needed. In this paper we describe a method that not only provides a generic platform for AR but also seamlessly deploys High Performance Computing (HPC) resources to deal with the additional computational load, as part of the distributed High Performance Visualization (HPV) pipeline used to render the virtual artifacts. Repeatable feature points are extracted from known views of a real object, and we match the best stored view to the user's viewpoint, using the matched feature points to estimate the object's pose. We also show how our AR framework can be used in the real world by presenting a marker-less AR interface for Transcranial Magnetic Stimulation (TMS).
Existing Grid visualization systems typically focus on the distribution onto remote machines of some or all of the processes encompassing the visualization pipeline, with the aim of increasing the maximum data size, achievable frame rates or display resolution. Such systems may rely on a particular piece of visualization software, and require that the end users have some degree of knowledge in its use, and in the concepts of the Grid itself. This paper describes an architecture for Grid visualization that abstracts away from the underlying hardware and software, and presents the user with a generic interface to a range of visualization technologies, switching between hardware and software to best meet the requirements of that user. We assess the difficulties involved in creating such a system, such as selecting appropriate visualization pipelines, deciding how to distribute the processing between machines, scheduling jobs using Grid middleware, and creating a flexible abstract description language for visualization. Finally, we describe a prototype implementation of such a system, and consider to what degree it might meet the requirements of real world visualization users.
If we were to have a Grid infrastructure for visualization, what technologies would be needed to build such an infrastructure, what kind of applications would benefit from it, and what challenges are we facing in order to accomplish this goal? In this survey paper, we make use of the term ‘visual supercomputing’ to encapsulate a subject domain concerning the infrastructural technology for visualization. We consider a broad range of scientific and technological advances in computer graphics and visualization, which are relevant to visual supercomputing. We identify the state‐of‐the‐art technologies that have prepared us for building such an infrastructure. We examine a collection of applications that would benefit enormously from such an infrastructure, and discuss their technical requirements. We propose a set of challenges that may guide our strategic efforts in the coming years.
I like to run
I have an open door policy - you are welcome to come and find me at any time that I am available. You can see my current availability on the calendar below or contact me to confirm a meeting.
My office is Room 215 located in the Newton Building on Salford University main campus.