Research Projects

  • Immersive Accessibility
  • Data Stream Mining
  • Quantified Self
  • Responsive Subtitles
  • Caption Realignment
  • Haptic Video
  • Social Media
  • 3D Tabletop Display
  • Information Visualization
  • Macroscopic Bloodflow
  • Estimating Femoroacetabular Impingement
  • Seldinger Technique Simulation
  • Markerless Augmented Reality
  • HPC Augmented Reality
  • High Performance Visualization


Disruptive approaches for subtitling in immersive environments

Hughes, CJ, Montagud, M & tho Pesch, P
Conference Paper: Proceedings of TVX 2019 (2019)

Abstract

The Immersive Accessibility Project (ImAc) explores how accessibility services can be integrated with 360° video, as well as new methods for enabling universal access to immersive content. ImAc is focused on inclusivity and addresses the needs of all users, of all ages, including those with sensory or learning disabilities, while considering language and user preferences. The project focuses on moving away from the constraints of existing technologies and explores new methods for creating a personal experience for each consumer. It is not good enough to simply retrofit subtitles into immersive content: this paper attempts to disrupt the industry with new and often controversial methods.

This paper provides an overview of the ImAc project and proposes guiding methods for subtitling in immersive environments. We discuss the current state-of-the-art for subtitling in immersive environments and the rendering of subtitles in the user interface within the ImAc project. We then discuss new experimental rendering modes that have been implemented, including a responsive subtitle approach, which dynamically re-blocks subtitles to fit the available space, and we explore alternative rendering techniques where the subtitles are attached to the scene.
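The responsive re-blocking behaviour described above can be sketched as a simple greedy line-breaker. This is an illustrative sketch only, not the ImAc implementation; the character limit stands in for whatever space the player reports as available:

```python
def reblock(text, max_chars):
    """Greedily re-block subtitle text into lines of at most max_chars
    characters, breaking only at word boundaries."""
    lines, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines
```

A real renderer would also weigh line balance and reading speed; the sketch only shows how one subtitle block can adapt to different widths.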

Analyzing data streams using a dynamic compact stream pattern algorithm

Oyewale, A, Hughes, CJ & Saraee, MH
Article: International Journal of Scientific and Technical Research in Engineering (2019)

Abstract

A growing number of applications that generate massive streams of data need intelligent data processing and online analysis. Data & Knowledge Engineering (DKE) has been known to stimulate the exchange of ideas and interaction between these two related fields of interest. DKE makes it possible to understand, apply and assess the knowledge and skills required for the development and application of data mining systems. With present technology, companies are able to collect vast amounts of data with relative ease; indeed, many companies now have more data than they can handle. A vital portion of this data comprises large unstructured data sets, which amount to up to 90 percent of an organization's data. With data quantities growing steadily, the explosion of data is putting a strain on infrastructures, with diverse companies having to increase their data center capacity with more servers and storage. This study conceptualizes handling enormous data as a stream mining problem that applies to continuous data streams, and proposes an ensemble of unsupervised learning methods for efficiently detecting anomalies in stream data.
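The abstract does not specify the ensemble's members. As a hypothetical illustration of the idea, two cheap online detectors (a running z-score using Welford's update, and a sliding-window range check) can be combined by agreement:

```python
from collections import deque

class StreamAnomalyEnsemble:
    """Hypothetical sketch: flag a point as anomalous only when a running
    z-score detector and a sliding-window range detector both agree."""
    def __init__(self, window=50, z_thresh=3.0):
        self.window = deque(maxlen=window)
        self.z_thresh = z_thresh
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # Welford running variance accumulator

    def update(self, x):
        # Vote 1: running z-score (Welford's online mean/variance).
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        std = (self.m2 / self.n) ** 0.5 if self.n > 1 else 0.0
        z_vote = std > 0 and abs(x - self.mean) / std > self.z_thresh
        # Vote 2: is x far outside the recent window's range?
        if len(self.window) >= 10:
            lo, hi = min(self.window), max(self.window)
            span = (hi - lo) or 1.0
            range_vote = x < lo - span or x > hi + span
        else:
            range_vote = False
        self.window.append(x)
        return z_vote and range_vote
```

Requiring agreement between detectors is one simple way an unsupervised ensemble can trade recall for a lower false-positive rate on a stream.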

Analyzing data streams using a dynamic compact stream pattern algorithm

Oyewale, A, Hughes, CJ & Saraee, MH
Conference Paper: Proceedings of the Eighth International Conference on Advances in Information Mining and Management (IMMM 2018) (2018)

Abstract

In order to succeed in the global competition, organizations need to understand and monitor the rate of data influx. The acquisition of continuous data has become a far-reaching concern in many fields. Recently, mining frequent patterns in data streams has become a challenging task in the field of data mining and knowledge discovery. Most of the datasets generated are in the form of a stream (stream data), posing the challenge of continuous arrival. The process of extracting knowledge structures from such continuous, rapid data records is termed stream mining. This study conceptualizes the process of detecting outliers in, and responding to, stream data. This is done by proposing a Compressed Stream Pattern algorithm, which dynamically generates a frequency-descending prefix tree structure with only a single pass over the data. We show that applying tree restructuring techniques can considerably minimize the mining time on various datasets.
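A frequency-descending prefix tree built in a single pass might look like the following sketch. The node layout and the tie-breaking rule are illustrative assumptions, not the published algorithm, which additionally restructures the tree as frequencies drift:

```python
class Node:
    def __init__(self, item):
        self.item, self.count, self.children = item, 0, {}

class CompactStreamTree:
    """Hypothetical sketch of a frequency-descending prefix tree
    maintained in a single pass over a stream of transactions."""
    def __init__(self):
        self.root = Node(None)
        self.freq = {}  # running item frequencies

    def insert(self, transaction):
        # Update running frequencies first, then insert the transaction
        # ordered by current frequency (descending, ties broken by name)
        # so that common items share prefixes near the root.
        for item in transaction:
            self.freq[item] = self.freq.get(item, 0) + 1
        node = self.root
        for item in sorted(transaction, key=lambda i: (-self.freq[i], i)):
            node = node.children.setdefault(item, Node(item))
            node.count += 1
```

Keeping frequent items near the root is what makes the tree compact; the paper's restructuring step would reorder existing branches when the running frequencies change order.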

Understanding the diverse needs of subtitle users in a rapidly evolving media landscape

Armstrong, Mike, Brown, Andy, Crabb, Michael, Hughes, CJ, Jones, Rhianne & Sandford, James
Article: SMPTE Motion Imaging Journal (2017)

Abstract

Audiences are increasingly using services, such as video on demand and the Web, to watch television programs. Broadcasters need to make subtitles available across all these new platforms. These platforms also create new design opportunities for subtitles along with the ability to customize them to an individual’s needs. To explore these new opportunities for subtitles, we have begun the process of reviewing the guidance for subtitles on television and evaluating the original user research. We have found that existing guidelines have been shaped by a mixture of technical constraints, industry practice, and user research, constrained by existing technical standards. This paper provides an overview of the subtitle research at BBC R&D over the past two years. Our research is revealing significant diversity in the needs and preferences of frequent subtitle users, and points to the need for personalization in the way subtitles are displayed. We are developing a new approach to the authoring and display of subtitles that can respond to the user requirements by adjusting the subtitle layout on the client device.

Subtitling method and system

Hughes, CJ & Armstrong, M
Patent (2016)

Abstract

A method of retrieving supplementary data related to audio-video content with reference to a reference version of that content involves deriving an audio signature from audio-video content. The audio signature is searched against reference audio signatures of reference versions of the audio-video content. If a match is found, supplementary data related to the reference audio-video content is retrieved. The search is a directed search using data extracted from the transmission route by which the audio-video content is to be delivered to a consumer. The search process is thereby more efficient as the search may be started at an appropriate location or conducted in an appropriate direction, rather than searching blindly.
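The directed-search idea can be illustrated with a toy sketch: rather than scanning the reference from the start, the search expands outwards from an offset hint derived from the transmission route. Representing a signature as a plain list of hash values is an assumption for illustration only:

```python
def directed_search(clip_sig, ref_sig, hint, max_radius=None):
    """Hypothetical sketch: find clip_sig inside ref_sig by expanding
    outwards from an offset hint instead of scanning blindly from zero."""
    n, m = len(ref_sig), len(clip_sig)
    if max_radius is None:
        max_radius = n
    for radius in range(max_radius + 1):
        # Try the two candidate offsets at this distance from the hint.
        for offset in {hint - radius, hint + radius}:
            if 0 <= offset <= n - m and ref_sig[offset:offset + m] == clip_sig:
                return offset
    return None  # no match within the search radius
```

When the hint is close to the true position, the match is found after only a few comparisons, which is the efficiency gain the patent describes.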

Online news videos: the UX of subtitle position

Crabb, M, Jones, R, Armstrong, M & Hughes, CJ
Conference Paper: Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility (2015)

Abstract

Millions of people rely on subtitles when watching video content. The current change in media viewing behaviour involving computers has resulted in a large proportion of people turning to online sources as opposed to regular television for news information. This work analyses the user experience of viewing subtitled news videos presented as part of a web page. A lab-based user experiment was carried out with frequent subtitle users, focusing on determining whether changes in video dimension and subtitle location could affect the user experience attached to viewing subtitled content.

A significant improvement in user experience was seen when changing the subtitle location from the standard position of within a video at the bottom to below the video clip. Additionally, participants responded positively when given the ability to change the position of subtitles in real time, allowing for a more personalised viewing experience. This recommendation for an alternative subtitle positioning that can be controlled by the user is unlike current subtitling practice. It provides evidence that further user-based research examining subtitle usage outside of the traditional television interface is required.

Responsive design for personalised subtitles

Hughes, CJ, Armstrong, M, Jones, R & Crabb, M
Conference Paper: Proceedings of the 12th Web for All Conference (2015)

Abstract

The Internet has continued to evolve, becoming increasingly media rich. It is now a major platform for video content, which is available to a variety of users across a range of devices. Subtitles enhance this experience for many users. However, subtitling techniques are still based on early television systems, which impose limitations on font type, size and line length. These are no longer appropriate in the context of a modern web-based culture.

In this paper we describe a new approach to displaying subtitles alongside the video content. This follows the responsive web design paradigm enabling subtitles to be formatted appropriately for different devices whilst respecting the requirements and preferences of the viewer. We present a prototype responsive video player, and report initial results from a study to evaluate the value perceived by regular subtitle users.

Automatic retrieval of closed captions for web clips from broadcast TV content

Hughes, CJ & Armstrong, M
Conference Paper: NAB Broadcast Engineering Conference Proceedings (2015)

Abstract

As broadcasters’ web sites become more media rich, it would be prohibitively expensive to manually caption all of the videos provided. However, many of these videos have been clipped from broadcast television and would have been captioned at the point of broadcast.

The recent FCC ruling requires all broadcasters to provide closed captions for all ‘straight lift’ video clips that have been broadcast on television from January 2016. From January 2017 captions will be required for ‘Montages’ which consist of multiple clips, and the requirement to caption clips from live or near-live television will apply from July 2017.

This paper presents a method of automatically finding a match for a video clip from within a set of off-air television recordings. It then shows how the required set of captions can be autonomously identified, retimed and reformatted for use with IP-delivery. It also shows how captions can be retrieved for each sub-clip within a montage and combined to create a set of captions. Finally it describes how, with a modest amount of human intervention, live captions can be corrected for errors and timing to provide improved captions for video clips presented on the web.
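The retiming step can be illustrated in isolation: once the clip's position within the off-air recording is known, each broadcast caption is shifted onto the clip's timeline and captions falling outside the clip are dropped. This is a hypothetical sketch, not the published system:

```python
def retime_captions(captions, clip_start, clip_end):
    """Hypothetical sketch: shift broadcast caption timings onto a web
    clip's timeline. captions is a list of (start, end, text) tuples in
    broadcast time (seconds); the clip spans [clip_start, clip_end)."""
    retimed = []
    for start, end, text in captions:
        if end <= clip_start or start >= clip_end:
            continue  # caption lies wholly outside the clip
        # Clamp to the clip boundaries, then rebase to clip time zero.
        new_start = max(start, clip_start) - clip_start
        new_end = min(end, clip_end) - clip_start
        retimed.append((new_start, new_end, text))
    return retimed
```

For a montage, the same function would be applied per matched sub-clip and the resulting caption lists concatenated with cumulative offsets.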

Storyboarding for visual analytics

Walker, R, Cenydd, L, Pop, S, Miles, H, Hughes, CJ, Teahan, W & Roberts, J
Article: Information Visualization (2015)

Abstract

Analysts wish to explore different hypotheses, organize their thoughts into visual narratives and present their findings. Some developers have used algorithms to ascertain key events from their data, while others have visualized different states of their exploration and utilized free-form canvases to enable the users to develop their thoughts. What is required is a visual layout strategy that summarizes specific events and allows users to layout the story in a structured way. We propose the use of the concept of ‘storyboarding’ for visual analytics. In film production, storyboarding techniques enable film directors and those working on the film to previsualize the shots and evaluate potential problems. We present six principles of storyboarding for visual analytics: composition, viewpoints, transition, annotability, interactivity and separability. We use these principles to develop epSpread, which we apply to VAST Challenge 2011 microblogging data set and to Twitter data from the 2012 Olympic Games. We present technical challenges and design decisions for developing the epSpread storyboarding visual analytics tool that demonstrate the effectiveness of our design and discuss lessons learnt with the storyboarding method.

ImaGiNe Seldinger: first simulator for Seldinger technique and angiography training

Luboz, V, Zhang, Y, Johnson, S, Song, Y, Kilkenny, C, Hunt, C, Woolnough, H, Guediri, S, Zhai, J, Odetoyinbo, T, Littler, P, Fisher, A, Hughes, CJ, Chalmers, N, Kessel, D, Clough, PJ, Ward, J, Phillips, R, How, T, Bulpitt, A, John, NW, Bello, F & Gould, D
Article: Computer Methods and Programs in Biomedicine (2013)

Abstract

In vascular interventional radiology, procedures generally start with the Seldinger technique to access the vasculature, using a needle through which a guidewire is inserted, followed by navigation of catheters within the vessels. Visual and tactile skills are learnt in a patient apprenticeship which is expensive and risky for patients. We propose a training alternative through a new virtual simulator supporting the Seldinger technique: ImaGiNe (imaging guided interventional needle) Seldinger. It is composed of two workstations: (1) a simulated pulse is palpated, in an immersive environment, to guide needle puncture and (2) two haptic devices provide a novel interface where a needle can direct a guidewire and catheter within the vessel lumen, using virtual fluoroscopy. Different complexities are provided by 28 real patient datasets. The feel of the simulation is enhanced by replicating, with the haptics, real force and flexibility measurements. A preliminary validation study has demonstrated training effectiveness for skills transfer.

A directed particle system for optimised visualization of blood flow in complex networks

Pop, SR, Hughes, CJ, Ap Cenydd, L & John, NW
Article: Studies in Health Technology and Informatics: Medicine Meets Virtual Reality 20 (2013)

Abstract

This paper introduces a novel technique for the visualization of blood (or other fluid) flowing through a complex 3D network of vessels. The Directed Particle System (DPS) approach is loosely based on the computer graphics concept of flocking agents. It has been developed and optimised to provide effective real time visualization and qualitative simulation of fluid flow. There are many potential applications of DPS, and one example - a decision support tool for coronary collateralization - is discussed.
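Loosely, one DPS update step could look like the sketch below, in which each particle advances along a list of vessel centreline waypoints. The real DPS layers flocking-style behaviours on top of this, which are not modelled here:

```python
def advect(particles, centreline, speed=1.0):
    """Hypothetical sketch of a directed particle step: each particle
    moves toward its current centreline waypoint at a fixed speed and,
    on arrival, is directed to the next waypoint along the vessel.
    particles: list of dicts with 'pos' (x, y) and 'target' index."""
    for p in particles:
        tx, ty = centreline[p["target"]]
        x, y = p["pos"]
        dx, dy = tx - x, ty - y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= speed:
            # Waypoint reached: snap to it and advance along the vessel.
            p["pos"] = (tx, ty)
            if p["target"] < len(centreline) - 1:
                p["target"] += 1
        else:
            p["pos"] = (x + speed * dx / dist, y + speed * dy / dist)
```

Varying the per-particle speed with local vessel radius, plus separation and cohesion terms between neighbours, would recover the flocking-like qualitative flow the paper describes.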

Haptic needle as part of medical training simulator

Hughes, C & John, N
Patent (2012)

epSpread - Storyboarding for visual analytics

Cenydd, L, Walker, R, Pop, S, Miles, H, Hughes, C, Teahan, W & Roberts, J
Conference Paper: IEEE Conference on Visual Analytics Science and Technology (VAST) (2011)

Abstract

We present epSpread, an analysis and storyboarding tool for geolocated microblogging data. Individual time points and ranges are analysed through queries, heatmaps, word clouds and streamgraphs. The underlying narrative is shown on a storyboard-style timeline for discussion, refinement and presentation. The tool was used to analyse data from the VAST Challenge 2011 Mini-Challenge 1, tracking the spread of an epidemic using microblogging data. In this article we describe how the tool was used to identify the origin and track the spread of the epidemic.

Calibrating the Kinect with a 3D projector to create a tangible tabletop interface

Hughes, CJ, Sinclair, F, Pagella, T & Roberts, J
Conference Paper: 2011 Joint Virtual Reality Conference (EuroVR-EGVE) (2011)

Abstract

In this poster we present a portable and easy-to-calibrate 3D tabletop display that supports understanding of complex datasets and simulations by providing a visualization environment with natural interaction. The table enables users to interact with complex datasets in a natural way and engenders group and collaborative interaction. We also demonstrate the use of the display with an example environmental visualization tool designed for stakeholder engagement.

3D measuring tool for estimating femoroacetabular impingement

Hughes, CJ & John, NW
Article: Studies in Health Technology and Informatics: Medicine Meets Virtual Reality 20 (2011)

Abstract

Osteoarthritis of the hip is commonly caused by repetitive contact between abnormal skeletal prominences at the anterosuperior femoral head-neck junction and the rim of the acetabular socket. Current methods for estimating femoroacetabular impingement by analyzing the sphericity of the femoral head require manual measurements, which are both inaccurate and open to interpretation. In this research we provide a prototype software tool to improve this estimation.

Using a Kinect interface to develop an interactive 3D tabletop display

Cenydd, L, Hughes, CJ, Roberts, JC & Walker, R
Conference Paper: Eurographics 2011 Posters (2011)

Abstract

Since the release of the motion picture 'Minority Report' in 2002, which depicts Tom Cruise interacting with a video display using only hand gestures, there has been significant interest in the development of intelligent display technology that users can interact with through gestures. In the real world it is commonplace for us to use gestures and body language to reinforce our communication, so it is natural to want to interact with our virtual media in the same way. Traditional methods for pose recognition use cameras to track the position of the user; however, this can be very challenging to do accurately in environments where the cameras can become occluded or the lighting conditions change. In this research we prototyped a 3D tabletop display and explored the Kinect game controller as a possible solution for tracking the pose and gestures of a user interacting with the display.

A model for flexible tools used in minimally invasive medical virtual environments

Soler, F, Luzon, MV, Pop, SR, Hughes, CJ, John, NW & Torres, JC
Article: Studies in Health Technology and Informatics: Medicine Meets Virtual Reality 18 (2011)

Abstract

Within the limits of current technology, many applications of a virtual environment will trade off accuracy for speed. This is not an acceptable compromise in a medical training application, where both are essential, so efficient algorithms must be developed. The purpose of this project is the development and validation of a novel physics-based, real-time tool manipulation model that is easy to integrate into any medical virtual environment requiring support for the insertion of long flexible tools into complex geometries. This encompasses medical specialities such as vascular interventional radiology, endoscopy and laparoscopy, where training, prototyping of new instruments/tools and mission rehearsal can all be facilitated by an immersive medical virtual environment. Our model accurately uses patient-specific data and adapts to the geometrical complexity of the vessel in real time.

A robotic needle interface for interventional radiology training

Hughes, CJ & John, N
Conference Paper: Proceedings of the 4th Hamlyn Symposium on Medical Robotics (2011)

Abstract

Interventional Radiology (IR) provides a minimally invasive method for accessing vessels and organs as an alternative to traditional open surgery. By coaxially manipulating a catheter and guidewire through the vascular system, a range of pathologies can be treated from within the vessels themselves.

The Seldinger technique [1] focuses on the initial step of gaining access to a vessel, by means of a needle puncture into an artery. After identifying that the needle is within the vessel, by a flow of blood from the hub of the needle, a guidewire is then passed through the needle into the vessel. Both tactile feedback and fluoroscopy (real time x-ray imaging) are used to guide the wire into a suitable position within the vessel. Finally the needle is removed, whilst applying pressure to the vessel to stem the bleeding, and the guidewire is left in place to act as a conduit for the catheter.

In collaboration with other groups in the UK (the CraIVE consortium) we have developed a simulator for training the steps of the Seldinger technique [2]. It uses segmented 3D vascular data from real patients [3] and the measured properties of the instruments [4] in order to provide a physically correct virtual environment.

In order to provide a tactile real world interface into the virtual environment, two hardware devices were used. Firstly a proprietary VSP interface (Vascular Simulation Platform, from Mentice, Sweden) was used to track the position and rotation of the guidewire and catheter coaxially as well as the depth and rotation of the needle, as shown in figure 3. Secondly a 'HapticNeedle' interface (UK Patent Application Number: 1001399.3, European Patent Application Number: PCT/EP2010/066489) was developed at Bangor University, in order to allow the trainee to insert and manipulate the orientation of the physical needle. The two devices were coupled together with a guide tube, transferring the instruments from the 'HapticNeedle' into the VSP. The construction of this interface is described in this paper.

The need to touch medical virtual environments?

Bello, F, Coles, T, Gould, D, Hughes, CJ, John, N, Vidal, F & Watt, S
Conference Paper: Workshop of Medical Virtual Environments at IEEE VR 2010 (2010)

Abstract

Haptics technologies are frequently used in virtual environments to allow participants to touch virtual objects. Medical applications are no exception, and a wide variety of commercial and bespoke haptics hardware solutions have been employed to aid in the simulation of medical procedures. Intuitively, the use of haptics will improve training for the task; however, little evidence has been published to prove that this is indeed the case. In this paper we summarise the available evidence and use a case study from interventional radiology to discuss how important it is to touch medical virtual environments.

Computational requirements of the virtual patient

John, N, Hughes, CJ, Pop, S, Vidal, F & Buckley, O
Conference Paper: 1st International Conference on Mathematical and Computational Biomedical Engineering (2009)

Abstract

Medical visualization in a hospital can be used to aid training, diagnosis, and pre- and intra-operative planning. In such an application, a virtual representation of a patient is needed that is interactive, can be viewed in three dimensions (3D), and simulates physiological processes that change over time. This paper highlights some of the computational challenges of implementing a real time simulation of a virtual patient, when accuracy can be traded-off against speed. Illustrations are provided using projects from our research based on Grid-based visualization, through to use of the Graphics Processing Unit (GPU).

Particle methods for a virtual patient

Buckley, O, Hughes, CJ, John, N & Pop, S
Conference Paper: 1st International Conference on Mathematical and Computational Biomedical Engineering (2009)

Abstract

The particle systems approach is a well known technique in computer graphics for modelling fuzzy objects such as fire and clouds. The algorithm has also been applied to different biomedical applications and this paper presents two such methods: a charged particle method for soft tissue deformation with integrated haptics; and a blood flow visualization technique based on boids. The goal is real time performance with high fidelity results.

Real-time Seldinger technique simulation in complex vascular models

Luboz, V, Hughes, CJ, Gould, D, John, N & Bello, F
Article: International Journal of Computer Assisted Radiology and Surgery (2009)

Abstract

Purpose: Commercial interventional radiology vascular simulators emulate instrument navigation and device deployment, though none supports the Seldinger technique, which provides initial access to the vascular tree. This paper presents a novel virtual environment for teaching this core skill.
Methods: Our simulator combines two haptic devices: one for vessel puncture with a virtual needle, and one for catheter and guidewire manipulation. The simulation software displays the instrument interactions with the vessels. Instruments are modelled using a mass-spring approximation, while efficient collision detection and collision response allow real-time interaction.
Results: Experienced interventional radiologists evaluated the haptic components of our simulator as realistic and accurate. The vessel puncture haptic device is a first prototype for simulating the Seldinger technique. Our simulator exhibits realistic instrument behaviour when compared with real instruments in a vascular phantom.
Conclusion: This paper presents the first simulator to train the Seldinger technique. The preliminary results confirm its utility for interventional radiology training.
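The mass-spring approximation mentioned in the Methods can be illustrated with a minimal 1-D chain that relaxes to its rest configuration. The parameters and integration scheme below are illustrative assumptions, not those of the simulator:

```python
def simulate_chain(n=5, rest=1.0, k=50.0, damping=0.9, dt=0.01, steps=2000):
    """Hypothetical sketch of a mass-spring instrument approximation:
    a 1-D chain of point masses joined by springs of rest length `rest`.
    Node 0 is clamped (the insertion point); the others relax freely."""
    pos = [0.0] * n  # all nodes start collapsed at the origin
    vel = [0.0] * n
    for _ in range(steps):
        force = [0.0] * n
        for i in range(n - 1):
            # Hooke's law along the chain: positive stretch pulls
            # neighbours together, negative stretch pushes them apart.
            stretch = (pos[i + 1] - pos[i]) - rest
            f = k * stretch
            force[i] += f
            force[i + 1] -= f
        for i in range(1, n):  # semi-implicit Euler, node 0 stays fixed
            vel[i] = (vel[i] + force[i] * dt) * damping
            pos[i] += vel[i] * dt
    return pos
```

A guidewire model would extend this with bending springs and collision response against the vessel wall, but the relaxation loop is the same idea.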

Physics-based virtual environment for training core skills in vascular interventional radiological procedures

John, NW, Luboz, V, Bello, F, Hughes, CJ, Vidal, FP, Lim, IS, How, TV, Zhai, J, Johnson, S, Chalmers, N, Brodlie, K, Bulpit, A, Song, Y, Kessel, DO, Philips, R, Ward, JW, Pisharody, S, Zhang, Y, Crawsgaw, CM & Gould, DA
Article: Studies in Health Technology and Informatics: Medicine Meets Virtual Reality 16 (2008)

Abstract

Recent years have seen a significant increase in the use of Interventional Radiology (IR) as an alternative to open surgery. A large number of IR procedures commence with needle puncture of a vessel to insert guidewires and catheters: these clinical skills are acquired by all radiologists during training on patients, associated with some discomfort and, occasionally, complications. While some visual skills can be acquired using models such as the ones used in surgery, these have limitations for IR, which relies heavily on a sense of touch. Both patients and trainees would benefit from a virtual environment (VE) conveying touch sensation to realistically mimic procedures. The authors are developing a high-fidelity VE providing a validated alternative to the traditional apprenticeship model used for teaching the core skills. The current version of the CRaIVE simulator combines custom-built software, haptic devices and commercial equipment.

Utilising the grid for augmented reality

Hughes, CJ
Thesis (2008)

Abstract

Traditionally registration and tracking within Augmented Reality (AR) applications have been built around specific markers which have been added into the user’s viewpoint and allow for their position to be tracked and their orientation to be estimated in real-time. All attempts to implement AR without specific markers have increased the computational requirements and some information about the environment is still needed in order to match the registration between the real world and the virtual artifacts. This thesis describes a novel method that not only provides a generic platform for AR but also seamlessly deploys High Performance Computing (HPC) resources to deal with the additional computational load, as part of the distributed High Performance Visualization (HPV) pipeline used to render the virtual artifacts. The developed AR framework is then applied to a real world application of a marker-less AR interface for Transcranial Magnetic Stimulation (TMS), named BART (Bangor Augmented Reality for TMS).

Three prototypes of BART are presented, along with a discussion of the subsequent limitations and solutions of each. First by using a proprietary tracking system it is possible to achieve accurate tracking, but with the limitations of having to use bold markers and being unable to render the virtual artifacts in real time. Second, BART v2 implements a novel tracking system using computer vision techniques. Repeatable feature points are extracted from the users view point to build a description of the object or plane that the virtual artifact is aligned with. Then as each frame is updated we use the changing position of the feature points to estimate how the object has moved. Third, the e-Viz framework is used to autonomously deploy HPV resources to ensure that the virtual objects are rendered in real-time. e-Viz also enables the allocation of remote High Performance Computing (HPC) resources to handle the computational requirements of the object tracking and pose estimation.
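The BART v2 idea of estimating motion from the changing positions of feature points can be illustrated with a deliberately simplified sketch that fits only a 2-D translation and a uniform scale from matched point pairs (the real system estimates full object pose):

```python
def estimate_motion(prev_pts, curr_pts):
    """Hypothetical sketch: recover an approximate 2-D motion of a
    tracked plane between frames from matched feature points.
    Returns (scale, (tx, ty)); a full pose estimate would instead fit
    a homography from the same correspondences."""
    n = len(prev_pts)
    pcx = sum(x for x, _ in prev_pts) / n
    pcy = sum(y for _, y in prev_pts) / n
    ccx = sum(x for x, _ in curr_pts) / n
    ccy = sum(y for _, y in curr_pts) / n
    # Scale: ratio of total point spread about each centroid.
    prev_spread = sum(((x - pcx) ** 2 + (y - pcy) ** 2) ** 0.5
                      for x, y in prev_pts)
    curr_spread = sum(((x - ccx) ** 2 + (y - ccy) ** 2) ** 0.5
                      for x, y in curr_pts)
    scale = curr_spread / prev_spread if prev_spread else 1.0
    # Translation: displacement of the centroid between frames.
    return scale, (ccx - pcx, ccy - pcy)
```

Centroid-and-spread fitting is robust to a few noisy matches; in practice an outlier-rejection step such as RANSAC would be run over the correspondences first.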

Improving the modeling of medical imaging data for simulation

Villard, P, Littler, P, Gough, V, Vidal, F, Hughes, CJ, John, N, Luboz, V, Bello, F, Song, Y, Holbrey, R, Bulpitt, A, Mullan, D, Chalmers, N, Kessel, D & Gould, D
Conference Paper: Proceedings of the United Kingdom Radiology Congress (UKRC) (2008)

Abstract

PURPOSE-MATERIALS: To use patient imaging as the basis for developing virtual environments (VE).
BACKGROUND: Interventional radiology basic skills are still taught in an apprenticeship in patients, though these could be learnt in high fidelity simulations using VE. Ideally, imaging data sets for simulation of image-guided procedures would alter dynamically in response to deformation forces such as respiration and needle insertion. We describe a methodology for deriving such dynamic volume rendering from patient imaging data.
METHODS: With patient consent, selected, routine imaging (computed tomography, magnetic resonance, ultrasound) of straightforward and complex anatomy and pathology was anonymised and uploaded to a repository at Bangor University. Computer scientists used interactive segmentation processes to label target anatomy for creation of a surface (triangular) and volume (tetrahedral) mesh. Computer modelling techniques used a mass spring algorithm to map tissue deformations such as needle insertion and intrinsic motion (e.g. respiration). These methods, in conjunction with a haptic device, provide output forces in real time to mimic the 'feel' of a procedure. Feedback from trainees and practitioners was obtained during preliminary demonstrations.
RESULTS: Data sets were derived from 6 patients and converted into deformable VEs. Preliminary content validation studies of a framework developed for training on liver biopsy procedures, demonstrated favourable observations that are leading to further revisions, including implementation of an immersive VE.
CONCLUSION: It is possible to develop dynamic volume renderings from static patient data sets and these are likely to form the basis of future simulations for IR training of procedural interventions.

Interventional radiology core skills simulation: mid-term status of the CRaIVE projects

Gould, D, Vidal, F, Hughes, CJ, Villard, P, Luboz, V, John, N, Bello, F, Bulpitt, A, Gough, V & Kessel, D
Conference Paper: Cardiovascular and Interventional Radiological Society of Europe (CIRSE) (2008)

Abstract

There is a shortage of radiologists trained in performance of Interventional radiology (IR) procedures. Visceral and vascular IR techniques almost universally commence with a needle puncture, usually to a specific target for biopsy, or to introduce wires and catheters for diagnosis or treatment. These skills are learnt in an apprenticeship in simple diagnostic procedures in patients, though there are drawbacks to this training method. In addition, certification depends partly on a record of the number of procedures performed, with no current method of objective IR skills assessment.

Despite the presence of an effective mentor, the apprenticeship method of training presents some risks to patients: these could be mitigated in a pre-patient training curriculum, which would use simulation to provide skills training.

Adaptive infrastructure for visual computing

Brodlie, K, Brooke, J, Chen, M, Chisnall, D, Hughes, C, John, N, Jones, M, Riding, M, Roard, N, Turner, M & Wood, J
Article: Theory and Practice of Computer Graphics (2007)

Abstract

Recent hardware and software advances have demonstrated that it is now practicable to run large visual computing tasks over heterogeneous hardware with output on multiple types of display devices. As the complexity of the enabling infrastructure increases, then so too do the demands upon the programmer for task integration as well as the demands upon the users of the system. This places importance on system developers to create systems that reduce these demands. Such a goal is an important factor of autonomic computing, aspects of which we have used to influence our work. In this paper we develop a model of adaptive infrastructure for visual systems. We design and implement a simulation engine for visual tasks in order to allow a system to inspect and adapt itself to optimise usage of the underlying infrastructure. We present a formal abstract representation of the visualization pipeline, from which a user interface can be generated automatically, along with concrete pipelines for the visualization. By using this abstract representation it is possible for the system to adapt at run time. We demonstrate the need for, and the technical feasibility of, the system using several example applications.
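The abstract pipeline representation that adapts at run time can be sketched as a minimal stage registry. This is an illustrative toy, not the system described in the paper, whose pipelines are generated from a formal abstract description:

```python
class Pipeline:
    """Hypothetical sketch of an adaptable visualization pipeline:
    named stages (e.g. filter -> map -> render) applied in order,
    where any stage can be swapped out while the system is running."""
    def __init__(self):
        self.stages = {}  # stage name -> callable
        self.order = []   # execution order of stage names

    def add(self, name, fn):
        self.stages[name] = fn
        if name not in self.order:
            self.order.append(name)

    def swap(self, name, fn):
        # Adapt at run time: replace one stage, keep the rest intact.
        self.stages[name] = fn

    def run(self, data):
        for name in self.order:
            data = self.stages[name](data)
        return data
```

In the adaptive setting, the `swap` call would be driven by the system itself, e.g. replacing a local renderer stage with a remote one when a dataset outgrows the desktop.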

A framework for adaptive visualization

Brodlie, K, Brooke, J, Chen, M, Chisnall, D, Hughes, C, John, N, Jones, M, Riding, M, Roard, N, Turner, M & Wood, J
Conference PaperIEEE Visualization (2006)

Abstract

Although desktop graphical capabilities continually improve, visualization at interactive frame rates remains a problem for very large datasets or complex rendering algorithms. This is particularly evident in scientific visualization (e.g., medical data or simulations of fluid dynamics), where high-performance computing facilities organised in a distributed infrastructure need to be used to achieve reasonable rendering times. Such distributed visualization systems are required to be increasingly flexible: they need to be able to integrate heterogeneous hardware (both for rendering and display), span different networks, easily reuse existing software, and present user interfaces appropriate to the task (both single-user and collaborative use). Current complex distributed software systems tend to be hard to administer and debug, and tend to respond poorly to faults (hardware or software).

In recognition of the increasing complexity of general computing systems (not specifically visualization), IBM have suggested the Autonomic Computing approach to enable self-management through the means of self-configuration, self-optimisation, self-healing and self-protection.
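As a toy illustration of the self-healing aspect of that approach, a visualization task can be retried on alternative resources when the preferred one faults, rather than surfacing the failure to the user. The resource names and simulated fault below are invented for illustration and are not from the paper.

```python
# Self-healing sketch: try each candidate resource in preference order,
# recording faults and falling back until one succeeds.
def self_healing_run(task, resources):
    errors = {}
    for name, run in resources:
        try:
            return name, run(task)
        except Exception as exc:          # faulty resource: record and move on
            errors[name] = exc
    raise RuntimeError(f"all resources failed: {errors}")

def gpu_render(task):
    raise RuntimeError("GPU node offline")   # simulated hardware fault

def software_render(task):
    return f"software-rendered:{task}"

used, result = self_healing_run(
    "frame-42", [("gpu", gpu_render), ("software", software_render)])
print(used, result)  # software software-rendered:frame-42
```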

A flexible infrastructure for delivering augmented reality enabled transcranial magnetic stimulation

Hughes, CJ & John, NW
ArticleStudies in Health Technology and Informatics (2006)

Abstract

Transcranial Magnetic Stimulation (TMS) is the process in which electrical activity in the brain is influenced by a pulsed magnetic field. Common practice is to align an electromagnetic coil with points of interest identified on the surface of the brain, from an MRI scan of the subject. The coil can be tracked using optical sensors, enabling the targeting information to be calculated and displayed on a local workstation. In this paper we explore the hypothesis that using an Augmented Reality (AR) interface for TMS will improve the efficiency of carrying out the procedure. We also aim to provide a flexible infrastructure that if required, can seamlessly deploy processing power from a remote high performance computing resource.
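As an illustration of the kind of targeting computation such a system performs (not code from the paper), the off-axis error between a tracked coil and a point of interest can be computed as the distance from the target to the line through the coil along its axis; the coordinates below are invented.

```python
# Targeting-error sketch: distance from a target point to the line defined
# by the tracked coil's position and pointing axis.
import numpy as np

def targeting_error(coil_pos, coil_axis, target):
    """Perpendicular distance from target to the coil's axis line."""
    axis = coil_axis / np.linalg.norm(coil_axis)
    v = target - coil_pos
    return float(np.linalg.norm(v - np.dot(v, axis) * axis))

coil_pos = np.array([0.0, 0.0, 10.0])
coil_axis = np.array([0.0, 0.0, -1.0])   # coil pointing straight down
target = np.array([0.3, 0.4, 0.0])       # point of interest on the scalp
print(round(targeting_error(coil_pos, coil_axis, target), 2))  # 0.5
```

An AR interface can overlay this error directly on the operator's view of the subject, instead of requiring them to glance at a separate workstation display.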

A generic approach to high performance visualization enabled augmented reality

Hughes, CJ, John, N & Riding, M
Conference PaperProceedings of the UK e-Science All Hands Meeting 2006 (2006)

Abstract

Traditionally, registration and tracking within Augmented Reality (AR) applications have been built around limited bold markers, which allow for their orientation to be estimated in real time. All attempts to implement AR without specific markers have increased the computational requirements, and some information about the environment is still needed. In this paper we describe a method that not only provides a generic platform for AR but also seamlessly deploys High Performance Computing (HPC) resources to deal with the additional computational load, as part of the distributed High Performance Visualization (HPV) pipeline used to render the virtual artifacts. Repeatable feature points are extracted from known views of a real object, and we then match the best stored view to the user's viewpoint, using the matched feature points to estimate the object's pose. We also show how our AR framework can then be used in the real world by presenting a markerless AR interface for Transcranial Magnetic Stimulation (TMS).
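The view-matching step can be sketched with plain nearest-neighbour descriptor matching: score each stored view by how many query descriptors it matches within a threshold, and keep the best-scoring view. The descriptors and threshold below are synthetic; a real implementation would use the repeatable feature points described in the paper and feed the matches into pose estimation.

```python
# Nearest-neighbour descriptor matching sketch for selecting the stored
# view that best matches the user's current viewpoint.
import numpy as np

def match_count(query, view, thresh=0.5):
    """Count query descriptors whose nearest stored-view descriptor
    lies within thresh (Euclidean distance)."""
    d = np.linalg.norm(query[:, None, :] - view[None, :, :], axis=2)
    return int(np.sum(d.min(axis=1) < thresh))

def best_view(query, views, thresh=0.5):
    """Index of the stored view with the most descriptor matches."""
    return int(np.argmax([match_count(query, v, thresh) for v in views]))

# Toy data: two stored views; the query descriptors resemble view 1.
rng = np.random.default_rng(0)
view0 = rng.normal(0.0, 1.0, (20, 8))
view1 = rng.normal(5.0, 1.0, (20, 8))
query = view1 + rng.normal(0.0, 0.05, view1.shape)
print(best_view(query, [view0, view1]))  # 1
```

Once the best view is chosen, the matched point correspondences give the input for estimating the object's pose relative to the camera.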

eViz : towards an integrated framework for high performance visualization

Riding, M, Wood, J, Brodlie, K, Brooke, J, Chen, M, Chisnall, D, Hughes, C, John, N, Jones, M & Roard, N
Conference PaperProceedings of the 4th UK e-Science All Hands Meeting (AHM'05) (2005)

Abstract

Existing Grid visualization systems typically focus on the distribution onto remote machines of some or all of the processes encompassing the visualization pipeline, with the aim of increasing the maximum data size, achievable frame rates or display resolution. Such systems may rely on a particular piece of visualization software, and require that the end users have some degree of knowledge in its use, and in the concepts of the Grid itself. This paper describes an architecture for Grid visualization that abstracts away from the underlying hardware and software, and presents the user with a generic interface to a range of visualization technologies, switching between hardware and software to best meet the requirements of that user. We assess the difficulties involved in creating such a system, such as selecting appropriate visualization pipelines, deciding how to distribute the processing between machines, scheduling jobs using Grid middleware, and creating a flexible abstract description language for visualization. Finally, we describe a prototype implementation of such a system, and consider to what degree it might meet the requirements of real-world visualization users.
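The resource-selection problem described above, switching between hardware and software rendering to best meet a user's requirements, can be caricatured in a few lines; the resource attributes, costs, and selection rule below are invented for illustration.

```python
# Resource-selection sketch: pick the cheapest candidate that satisfies
# the user's frame-rate and data-size requirements.
def select_resource(resources, req):
    ok = [r for r in resources
          if r["fps"] >= req["fps"] and r["max_gb"] >= req["data_gb"]]
    return min(ok, key=lambda r: r["cost"])["name"] if ok else None

resources = [
    {"name": "desktop-sw", "fps": 5,  "max_gb": 2,   "cost": 1},
    {"name": "cluster-hw", "fps": 30, "max_gb": 100, "cost": 10},
]

# A demanding interactive session forces the hardware-rendering cluster.
print(select_resource(resources, {"fps": 25, "data_gb": 10}))  # cluster-hw
```

In a full system this decision would be made by Grid middleware against live resource state rather than a static table, but the shape of the trade-off is the same.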

Visual supercomputing : technologies, applications and challenges

Brodlie, K, Brooke, J, Chen, M, Chisnall, D, Fewings, A, Hughes, C, John, N, Jones, M, Riding, M & Roard, N
ArticleComputer Graphics Forum (2005)

Abstract

If we were to have a Grid infrastructure for visualization, what technologies would be needed to build such an infrastructure, what kind of applications would benefit from it, and what challenges are we facing in order to accomplish this goal? In this survey paper, we make use of the term ‘visual supercomputing’ to encapsulate a subject domain concerning the infrastructural technology for visualization. We consider a broad range of scientific and technological advances in computer graphics and visualization, which are relevant to visual supercomputing. We identify the state-of-the-art technologies that have prepared us for building such an infrastructure. We examine a collection of applications that would benefit enormously from such an infrastructure, and discuss their technical requirements. We propose a set of challenges that may guide our strategic efforts in the coming years.

At My Office

My office is Room 215 located in the Newton Building on Salford University main campus.

My Calendar

Download
