IMPORTANT DATES

Author Deadlines
Call for Paper Submissions
» Journal-first (JIST or JPI) 15 Feb
» Conference (EXTENDED) 11 April
Acceptance Notification
» Journal-first (JIST or JPI) by 21 April
» Conference 9 May
Final Manuscripts Due
» Journal-first (JIST or JPI) 10 May
» Conference 23 May

Program Deadlines
Registration Opens Early May
Early Registration Ends 8 June
Attending In-person Reg Ends 22 June
Summer School 6 July
Technical Sessions 7-8 July

   

LIM 2022 Program

Join us in London for a full day of display science courses—the LIM 2022 Summer School—followed by two exciting days of technical talks and networking opportunities. In-person space is limited for both events. Online attendance is an option for the Technical Program, but not for the Summer School. The Post-LIM 2022 Networking Event + Demos in Cambridge is free (see the end of the program for details).

AT-A-GLANCE

6 July: LIM 2022 Summer School
7-8 July: LIM Technical Program
                    Thursday 7 July
                    Friday 8 July
9 July: Post-LIM 2022 Networking Event + demos in Cambridge

LIM Technical Program

Thursday 7 July 2022
Opening Keynote
10:00 – 11:10 London
Session Chair: Rafal Mantiuk, University of Cambridge (UK)
10:00
Foundations of Perception Engineering, Steven M. LaValle, Center for Ubiquitous Computing, University of Oulu (Finland)

Abstract: Virtual reality (VR) technology has enormous potential to transform society by creating perceptual illusions that can uniquely enhance education, collaborative design, health care, and social interaction, all from a distance. Further benefits include highly immersive computer interfaces, data visualization, and storytelling.  We propose in our research that VR and related fields can be reframed as perception engineering, in which the object being engineered is the perceptual illusion itself, and the physical devices that achieve it are auxiliary.

This talk reports on our progress toward developing mathematical foundations that attempt to bring the human-centered sciences of perceptual psychology, neuroscience, and physiology closer to core engineering principles by viewing the design and delivery of illusions as a coupled dynamical system. The system is composed of two interacting entities: the organism and its environment, in which the former may be biological or even an engineered robot. Our vision is that the research community will one day have principled engineering approaches to the design, simulation, prediction, and analysis of sustained, targeted perceptual experiences. It is hoped that this direction of research will offer valuable guidance and deeper insights into VR, robotics, and possibly the sciences that study perception.

11:10 – 11:40
BREAK
Perceptual metrics and optimization
11:40 – 12:50 London
Session Chairs: Rafal Mantiuk, University of Cambridge (UK) and Piotr Didyk, Università della Svizzera Italiana (Switzerland)
11:40
Focal Talk: Breaking the Limits of Display and Fabrication using Perception-aware Optimizations, Piotr Didyk, Università della Svizzera Italiana (Switzerland)

Abstract: Novel display devices and fabrication techniques enable highly tangible ways of creating, experiencing, and interacting with digital content. The capabilities offered by these new output devices, such as virtual and augmented reality head-mounted displays and new multi-material 3D printers, make them real game-changers in many fields. At the same time, the new possibilities offered by these devices impose many challenges for content creation techniques regarding quality and computational efficiency. This paper discusses the concept of perception-aware optimizations, which incorporate insights from human perception into computational methods to optimize content according to the capabilities of different output devices, e.g., displays, 3D printers, and requirements of the human sensory system. As demonstrated in this paper, the key advantage of such strategies is that tailoring computation to perceptually-relevant aspects of the content often reduces the computational cost related to the content creation or overcomes certain limitations of output devices. Besides discussing the general concept, the paper presents several specific applications where perception-aware optimization has been proven beneficial. The examples include methods for optimizing visual content for novel display devices that focus on perceived quality and new computational fabrication techniques for manufacturing objects that look and feel like real ones.

12:10
The Effect of Peripheral Contrast Sensitivity Functions on the Performance of the Foveated Wavelet Image Quality Index, Aliakbar Bozorgian, Marius Pedersen, and Jean-Baptiste Thomas, Norwegian University of Science and Technology (Norway)

Abstract: The Contrast Sensitivity Function (CSF) is an integral part of objective foveated image/video quality assessment metrics. In this paper, we investigate the effect of a new eccentricity-dependent CSF model on the performance of the foveated wavelet image quality index (FWQI). Our results do not show a considerable change in FWQI performance when it is evaluated against the LIVE-FBT-FCVR 2D dataset. We argue that the resolution of the head-mounted display used in the subjective experiment limits our ability to reveal the anticipated effect of the new CSF on FWQI performance.
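
For context, a classic eccentricity-dependent contrast-threshold model of the kind used in foveated quality metrics is that of Geisler and Perry (1998); the new CSF model evaluated in the paper may differ in form and parameters:

```latex
% Contrast threshold as a function of spatial frequency f (cycles/deg)
% and retinal eccentricity e (deg); e_2 is the eccentricity at which
% resolution halves, and CT_0 and \alpha are fitted constants.
CT(f, e) = CT_0 \exp\!\left(\alpha f \,\frac{e + e_2}{e_2}\right),
\qquad CS(f, e) = \frac{1}{CT(f, e)}
```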

12:30
A Comparative Study on the Loss Functions for Image Enhancement Networks, Aamir Mustafa1, Hongjie You2, and Rafal Mantiuk1; 1University of Cambridge (UK) and 2Huawei Technologies Duesseldorf (Germany)

Abstract: Image enhancement and image retouching processes are often dominated by global (shift-invariant) changes of colour and tone. Most “deep learning” based methods proposed for image enhancement are trained to enforce similarity in pixel values and/or in the high-level feature space. We hypothesise that for tasks such as image enhancement and retouching, which involve a significant shift in colour statistics, training the model to restore the overall colour distribution can be of vital importance. To address this, we study the effect of a Histogram Matching loss function on a state-of-the-art colour enhancement network — HDRNet. The loss enforces similarity of the RGB histograms of the predicted and the target images. By providing detailed qualitative and quantitative comparison of different loss functions on varied datasets, we conclude that enforcing similarity in the colour distribution achieves substantial improvement in performance and can play a significant role when choosing loss functions for image enhancement networks.
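
As a rough illustration of the quantity such a loss penalizes (not the authors' exact formulation, which would need to be differentiable for training), a per-channel RGB histogram comparison might look like this:

```python
# Illustrative sketch only: an L1 distance between per-channel RGB
# histograms of a predicted and a target image. A real training loss
# would use a differentiable (soft-binned) histogram, not np.histogram.
import numpy as np

def histogram_matching_loss(pred, target, bins=256):
    """pred, target: float images in [0, 1] with shape (H, W, 3)."""
    loss = 0.0
    for c in range(3):
        hp, _ = np.histogram(pred[..., c], bins=bins, range=(0.0, 1.0), density=True)
        ht, _ = np.histogram(target[..., c], bins=bins, range=(0.0, 1.0), density=True)
        loss += np.abs(hp - ht).mean()
    return loss / 3.0
```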

12:50 – 14:00
LUNCH BREAK
Perceptual and automotive displays
14:00 – 15:10 London
Session Chairs: Rafal Mantiuk, University of Cambridge (UK) and Tara Akhavan, Faurecia IRYStec Inc. (Canada)
14:00
Focal Talk: Perceptual Displays Today & Tomorrow – Evangelism, Productization and Evaluation, Tara Akhavan, Faurecia IRYStec Inc. (Canada)

Abstract: The evolution of displays in industries such as consumer electronics, aerospace, automotive, gaming, and digital signage has been a key factor in engaging and satisfying consumers. In this talk, I focus on the automotive market as one of the fastest-growing display markets, but most of the matters discussed are applicable to other display verticals. Perceptual and immersive display user experiences are attracting more manufacturers these days to build products matching end users’ expectations and needs while interacting with displays: perfect visibility in all conditions, personalization, less eye fatigue, and seamless interaction - in one phrase, a “mimic the real world” experience. Hardware and software advancements are critical but not enough to fulfil today’s knowledgeable consumer demands. User experience has become one of the highest priorities, leading companies to develop dedicated user experience (UX) teams to study the needs of the end user. Multi-disciplinary teams of experts are built to tackle the UX optimization challenge.

In this talk I will share our experience evangelizing the benefits of Perceptual displays, challenges of such evangelism, productization of it and most importantly measuring the performance of perceptual displays in a comparable way to traditional displays.

14:30
Does External Illumination Affect Color Acceptability Threshold for a Mixed Display Technology Cockpit?, Pooshpanjan Roy Biswas1, 2, Dominique Dumortier2, Sophie Jost2, Herve Drezet1, and Marie-Laure Avenel1; 1Technocentre Renault, and 2ENTPE, l'école de l'aménagement durable des territoires (France)

Abstract: Color acceptability is a complex phenomenon. Contrary to perceptibility, color acceptability is defined as the level of color difference that is considered under the limit of preferred color reproduction on two media. A system comprising two automotive displays, one OLED and one LCD, was used in this experiment. A previous study by the authors identified this limit for a daylight scenario in which an external illumination of 3000 lux at 5300 K illuminated the surface of the displays. In this study, a night-time driving scenario was simulated with a projector light source illuminating the displays at 50 lux and 1318 K. Statistical analysis is used to quantify statistically significant differences between the various conditions.

14:50
The Influence of Mismatches between Ambient Illumination and Display Colors on Video Viewers’ Subjective Experiences, Yunyang Shi and Anya Hurlbert, Newcastle University (UK)

Abstract: Mismatches between ambient illumination levels and display luminance can cause poor viewing experiences. This paper explores the influence of chromaticity differences between illumination and display on viewers’ subjective evaluations of color appearance, preference, and visual comfort when watching videos. Results show that when the chromaticity biases of display and illumination are incongruent, viewers like the video less than when the biases are congruent, and find its colors abnormal.

2-Minute Interactive Paper Previews Followed by the Interactive Paper Poster Session
15:10 – 16:30 London
Session Chair: Javier Vázquez Corral, Universitat Autònoma de Barcelona (Spain)
The Art and Science of Displaying Visual Space, Robert Pepperell and Alistair Burleigh, Fovotec and Cardiff Metropolitan University (UK)

Abstract: This paper considers the problem of how to display visual space naturalistically in image media. A long-standing solution is linear perspective projection, which is currently used in imaging technologies from cameras to 3D graphics renderers. Linear perspective has many strengths but also some significant weaknesses and over the centuries alternative techniques have been developed for creating more naturalistic images. Here we discuss the problem, its scientific background, and some of the approaches taken by artists and computer graphics researchers to find solutions. We briefly introduce our own approach, which is a form of nonlinear 3D geometry modelled on the perceptual structure of visual space and designed to work on standard displays. We conclude that perceptually modelled nonlinear approaches can make 3D imaging technology more naturalistic than methods based on linear perspective.

Effect of Bit-depth in Stochastic Gradient Descent Performance for Phase-only Computer-generated Holography Displays, Andrew C. Kadis, Benjamin Wetherfield, Jinze Sha, Fan Yang, Youchao Wang, and Timothy Wilkinson, University of Cambridge (UK)

Abstract: SGD (stochastic gradient descent) is an emerging technique for achieving high-fidelity projected images in CGH (computer-generated holography) display systems. In real-world applications, the devices that display the corresponding holographic fringes have limited bit-depth, depending on the specific display technology employed. SGD performance is adversely affected by this limitation, and in this work we quantitatively compare its impact on algorithmic performance at different bit-depths by developing our own algorithm, Q-SGD (Quantised-SGD). The choice of modulation device is a key decision in the design of a holographic display system, and the research goal here is to better inform the selection and application of individual display technologies.
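
A minimal sketch of the bit-depth constraint at the heart of this comparison (the quantisation step a phase-only modulator imposes; the Q-SGD algorithm itself is not reproduced here):

```python
import numpy as np

def quantise_phase(phase, bit_depth):
    """Quantise phase values (radians) to 2**bit_depth levels over [0, 2*pi).

    Hypothetical helper for illustration: a b-bit phase-only modulator can
    realise only 2**b distinct phase levels, which is the limitation that
    Q-SGD folds into the stochastic gradient descent loop.
    """
    step = 2 * np.pi / 2 ** bit_depth
    return (np.round(np.mod(phase, 2 * np.pi) / step) * step) % (2 * np.pi)
```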

Work In Progress: An Exposure Invariant Neural Network for Colour Correction, Abdullah Kucuk and Graham Finlayson, University of East Anglia; Rafal Mantiuk, University of Cambridge; and Maliha Ashraf, University of Liverpool (UK)

In this paper, we observe that the neural net solution - while delivering better colour correction accuracy compared to the simple (and widely deployed) 3x3 linear correction matrix approach - is not exposure invariant. That is to say, the network is tuned to mapping RGBs to XYZs for a fixed exposure level, and when this exposure level changes, its performance degrades (it then delivers less accurate colour correction than the 3x3 matrix approach, which is exposure invariant). We go on to investigate two remedies to the exposure variation problem. First, we augment the data we use to train the network to include responses for many different exposures. Second, we redesign the network so that, by construction, it is exposure invariant.

Experiments demonstrate that we can make neural nets that deliver good colour correction across exposure changes. Moreover, the correction performance is found to be better compared with linear colour correction. However, the root-polynomial regression method - which is also exposure invariant - performs better than the derived neural net solution.

Work In Progress: Weibull Tone Mapping (WTM) for the Enhancement of Underwater Imagery, Chloe Game1, Michael Thompson2, and Graham Finlayson1; 1University of East Anglia and 2Mott MacDonald Ltd. (UK)

In previous work, we described tonal enhancements by domain experts (biologists) to aid annotation of underwater seabed habitats. Tone maps were created using a typical interactive curve-manipulation GUI with a set of control points, which can be dragged to alter brightness and contrast. Such tools offer bespoke, targeted image enhancements that are preferred over more general automatic tools, but are too time-consuming to produce for large datasets.

We found that a smoother and simpler approximation of these tonal manipulations could be derived using our Weibull Tone Mapping (WTM) algorithm. This involves fitting a Weibull distribution (WD) to the brightness histograms of the input and user-adjusted output images, then solving for the tone map that maps the underlying distributions to each other. This tone mapping operation (TMO) was preferred by the experts to their own bespoke adjustments for identifying benthic habitats from imagery. WTM therefore provides the necessary building blocks for a targeted enhancement algorithm that can quickly create smooth tonal manipulations.
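
A minimal sketch of this idea, assuming both brightness distributions are well fitted by two-parameter Weibull distributions (shape k, scale s): matching the two CDFs then yields the closed-form tone map T(x) = s_out * (x / s_in) ** (k_in / k_out). This is not the authors' implementation, only the underlying construction:

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_tone_map(input_brightness, target_brightness):
    """Return a tone map T matching the Weibull fit of the input brightness
    to the Weibull fit of the target brightness.
    Both arguments are 1-D arrays of positive brightness samples."""
    k_in, _, s_in = weibull_min.fit(input_brightness, floc=0)    # shape, loc, scale
    k_out, _, s_out = weibull_min.fit(target_brightness, floc=0)
    # T(x) = F_out^{-1}(F_in(x)) reduces to a power curve for Weibull CDFs.
    return lambda x: s_out * (np.asarray(x) / s_in) ** (k_in / k_out)
```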

In this work, we explore how widely applicable the WTM algorithm is to underwater images by focusing on a larger dataset. Specifically, we introduce WTM as a parameterized enhancement tool in which analysts can specify a desirable target WD that an image is rendered to by modifying its two parameters. Under experimental conditions, 10 observers used WTM to enhance images to aid seabed habitat identification. When a suitable WTM adjustment could not be found, observers could interactively manipulate the WTM tone map using an interactive curve tool with 6 moveable control points until satisfied. We use this opportunity to further explore desirable TMOs and investigate the capability of WTM to simplify control-point tone-mapping tools.

We demonstrate that, given the choice, experts typically find a WTM enhancement sufficient for their analyses (81% of images) compared to an advanced adjustment from an interactive tool. Interestingly, in the latter cases, we find that the majority (91%) of TMOs could be approximated by our WTM algorithm, using mean CIE ΔE < 5 as our threshold for success. Intra- and inter-observer variability was low, and image content did not appear to influence observer tool choice.

These results further illustrate that the WD is a good model and target distribution of underwater image histograms. We see that WTM’s usage extends beyond simplification and smoothing of complex and time-consuming tonal manipulations, to a successful and preferred enhancement tool. This data provides the necessary groundwork to investigate whether a suitable WTM can be derived automatically from images.

Work In Progress: Effects of Size on Perception of Display Flicker: Comparison with Flicker Indices, Hyosun Kim, Eunjung Lee, Hyungsuk Hwang, Youra Kim, and Dong-Yeol Yeom; Samsung Display (Republic of Korea)

Abstract: Simulating images at 30 Hz, we observed the effect of size on display flicker perception. Additionally, we compared the results with various indices representing the degree of flickering. Participants perceived flicker to be stronger as the size of the stimuli increased. However, none of the flicker indices, such as JEITA, Flicker Visibility, and Flicker Modulation Amplitude, reflected this tendency. Since display makers generally use these flicker indices to represent the amount of flicker, the indices need to be supplemented to include the effects of size.

Work In Progress: A Quantum-relativistic Chromatic Adaptation Transform, Nicoletta Prencipe, Université de Bordeaux (France)

This work builds on an axiomatic theory of colour perception. The axioms can be summarized in the following statement: the space of perceived colours is the cone of positive elements of a formally real Jordan algebra of dimension 3.

There are only two possible choices for such an algebra, i.e., there are only two types of models. Existing colour spaces fall into the first category, while the second type of model has an intrinsically hyperbolic nature and makes use of mathematical concepts adapted from quantum mechanics and special relativity theory.

At an intuitive level, it is not hard to explain why it makes sense to invoke modern physics theories in the colour context. Colour perception is a process based on the duality between the measurement context and the observing apparatus (which might be the human visual system or a digital camera). This recalls the duality in quantum mechanics: it makes no sense to talk about a perceived colour without specifying the conditions under which it was measured. Perceived colours are not absolute, but relative to the viewing conditions.

This new model is also of interest because its mathematical formulation naturally gives rise to new proposals for colour metrics and transforms, which might be of interest in colour image processing.

A consequence of the relativistic nature of the model is a set of transformations, which are well-known in special relativity theory: Lorentz boosts.
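
For reference, the standard Lorentz boost in 1+1 dimensions takes the textbook form below; how it is normalized and embedded in a colour space is the contribution of this work and is not reproduced here.

```latex
% Standard 1+1-dimensional Lorentz boost (textbook form, for reference):
B(\beta) = \begin{pmatrix} \gamma & -\gamma\beta \\ -\gamma\beta & \gamma \end{pmatrix},
\qquad \gamma = \frac{1}{\sqrt{1-\beta^2}}, \quad |\beta| < 1
```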

We propose to use a normalized Lorentz boost as a chromatic adaptation transform (CAT) for automatic white balance (AWB). I will describe two different implementations of the boost CAT: one in the HCV colour space and another in a modified HCV obtained by adding Hering's opponency to H. I will discuss both visual and quantitative comparisons of the performance of this new method with respect to the classic von Kries diagonal CAT.

Work In Progress: LightSim: A Testbed for Evaluating Color Calibration Kits, Wei-Chung Cheng, Food & Drug Administration (US)

A testbed was developed to spectrally reproduce display stimuli for testing color calibration kits. The testbed, LightSim, consists of a tunable light source (TLS), an integrating sphere, and a spectroradiometer. The testbed was characterized as a 1-pixel, 1,024-primary, 40,000-level display in contrast to the regular n-pixel, 3-primary, 256-level displays. Primary spectra of three displays were used to emulate different color gamuts and lighting methods: a virtual reality device based on OLED (Oculus Rift), a professional-grade display based on CCFL-backlighting (NEC PA271), and a consumer-grade display based on LED-backlighting (HP Z24X) were measured to represent the DCI-P3, AdobeRGB, and sRGB color spaces, respectively.

In the experiment, a color calibration sensor (Datacolor SpyderX Elite) was tested with the 24 patches of the ColorChecker. The subject sensor allowed the user to select one of four backlighting modes (“White LED”, “Standard LED”, “General”, and “GB LED”). The experimental results show adequate linearity of luminance responses in the mid-range. Most color differences were less than 2.5 ΔE00, except for the darkest patch #24, indicating limited capability for measuring dark shades. None of the four backlighting modes outperformed the others, and two blue patches, #8 and #13, generated the most diverse results. This exercise demonstrates the utility of LightSim for emulating arbitrary spectra without employing actual displays based on different backlighting methods.

Displays and HDR
16:30 – 17:30 London
Session Chair: Özgur Yöntem, University of Cambridge (UK)
16:30
Invited Talk: High Dynamic Range Imaging—Technologies, Applications, and Perceptual Considerations, Timo Kunkel, Dolby Laboratories (US)

Abstract: High Dynamic Range imaging, better known by its acronym “HDR”, has established itself as a foundational component of today’s image fidelity. HDR technology is widely supported by millions of devices, from cameras to post-production tools, deployment systems, and displays, and is embraced by content creators and providers. HDR imaging is based on several key concepts that facilitate perceptually meaningful, artistically compelling, and technologically effective delivery of movies, TV shows, and video games that are more immersive and realistic than previously possible. This lecture provides an overview of the concepts enabling today’s HDR ecosystem, including perceptual and technological aspects, as well as industry standards, formats, and approaches.

Friday 8 July 2022
Holographic, tensor and wide colour gamut displays
09:15 – 10:40 London
Session Chairs: Özgur Yöntem, University of Cambridge (UK) and Kaan Akşit, University College London (UK)
09:15
CONFERENCE WELCOME & AWARDS
09:30
Focal Talk: Perceptually Guided Computer-generated Holography, Kaan Akşit, University College London (UK)

Abstract: Inventing immersive displays that can attain realism in visuals is a long-standing quest in the optics, graphics, and perception fields. As holographic displays can simultaneously address various depth levels, experts from industry and academia often pitch them as the next-generation display technology that could deliver such realism. However, holographic displays demand high computational complexity in image generation pipelines and suffer from visual quality-related issues.

This talk will describe our research efforts to combine findings on visual perception with Computer-Generated Holography (CGH) to achieve realism in visuals and derive CGH pipelines that can run at interactive rates (above 30 Hz). Specifically, I will explain how holographic displays could effectively generate three-dimensional images with good image quality and how these images could be generated to match the needs of human visual perception in resolution and statistics. Furthermore, I will demonstrate our CGH methods running at interactive rates with the help of learning strategies. As a result, we provide a glimpse into a potential future where CGH helps to replace the two-dimensional images generated on today's displays with authentic three-dimensional visuals that are perceptually realistic.

10:00
Towards Non-Lambertian Scenes for Tensor Displays, Eline Soetens, Armand Losfeld, Daniele Bonatto, Sarah Fachada, Laurie Van Bogaert, Gauthier Lafruit, and Mehrdad Teratani, Université Libre de Bruxelles (Belgium)

Abstract: Tensor displays are screens able to render a light field with correct depth perception without the viewer wearing glasses. Such devices have already been shown to accurately render scenes composed of Lambertian objects. This paper presents the model and prototyping of a three-layer tensor display, using repurposed computer monitors, and extends the light field factorization method to non-Lambertian objects. Furthermore, we examine the relation and limitations between the depth-of-field and the depth range for Lambertian and non-Lambertian scenes. Non-Lambertian scenes contain out-of-range disparities that cannot be properly rendered with the usual optimization method. We propose to artificially compress the disparity range of the scene by using two light fields focused at different depths, effectively solving the problem and allowing the scene to be rendered clearly on both the simulated and the prototyped tensor display.

10:20
Probing Perceptual Phenomena for Color Management, Trevor Canham and Marcelo Bertalmío, Spanish National Research Council - CSIC (Spain)

Abstract: Advancement of color management techniques is required to accommodate emerging formats and devices. To address this, the authors conducted experiments to characterize and account for the effects of metamerism error, chromatic adaptation to the surround, contrast adaptation to display dynamic range, and viewing-size-dependent effects of retinal signal pooling. These topics were assembled to address perceptual representation inconsistencies which are becoming more common with the popularity of mobile, High Dynamic Range (HDR), and Wide Color Gamut (WCG) displays. In this paper, we briefly summarize the findings of these efforts and compile a series of takeaways which are key to the problem of perceptual color management. Furthermore, we discuss the implications of these takeaways for the advancement of the field.

10:40 – 11:10
BREAK
VR/AR and volumetric content
11:10 – 12:20 London
Session Chairs: Özgur Yöntem, University of Cambridge (UK); and Aljosa Smolic, Lucerne University of Applied Sciences and Arts (Switzerland) and Trinity College Dublin (Ireland)
11:10
Focal Talk: Volumetric Video Content Creation for Immersive XR Experiences, Aljosa Smolic1, 3, Konstantinos Amplianitis2, 3, Matthew Moynihan3, Neill O’Dwyer3, Jan Ondrej2, 3, Rafael Pagés2, 3, Gareth W. Young3, and Emin Zerman3; 1Lucerne University of Applied Sciences and Arts (Switzerland), 2Volograms Limited (Ireland), and 3Trinity College Dublin (Ireland)

Abstract: Volumetric video (VV) is an emergent digital medium that enables novel forms of interaction and immersion within eXtended Reality (XR) applications. VV supports 3D representation of real-world scenes and objects that can be visualized from any viewpoint or viewing direction, an interaction paradigm commonly seen in computer games. This allows, for instance, real people to be brought into XR. Based on this innovative media format, it is possible to design new forms of immersive and interactive experiences that can be visualized via head-mounted displays (HMDs) in virtual reality (VR) or augmented reality (AR). This paper highlights technology for VV content creation developed by the V-SENSE lab and the startup company Volograms. It further showcases a variety of creative experiments applying VV for immersive storytelling in XR.

11:40
A Hybrid Multi-view and Eye-tracked Transparent Autostereoscopic Display for Augmented Reality, Charlie Schlick1, 2, Thomas Crespel1, 3, and Xavier Granier3, 4, and Patrick Reuter1, 2; 1University of Bordeaux, Inria, 2Bordeaux INP, CNRS (LaBRI UMR 5800), 3Institut d’Optique Graduate School - CNRS (LP2N UMR 5298), 4CNRS (Archéosciences Bordeaux UMR 6034) (France)

Abstract: We put forward the use of transparent 3D displays for augmented reality. For a glasses-free experience with autostereoscopy and a large viewing area, we study the use of a recent transparent display with multiple discrete and horizontally adjacent viewing zones. Although promising, this display cannot directly be used for augmented reality due to inconsistencies within and between the discrete viewing zones. In this work, we propose to overcome this limitation by tracking the user’s eyes to ensure continuous transitions, thus making the display feasible for augmented reality. In particular, we compensate for intensity variations, ensure consistent horizontal parallax within and between the adjacent viewing zones, and add vertical parallax. In this way, the display becomes a transparent augmented window that can be used for various augmented reality applications. We present results on a display with 5 viewing zones for three different use cases, evaluate the appropriateness, discuss the limitations, and show future directions.

12:00
Augmented Reality for Automatically Generating Robust Manufacturing and Maintenance Logs, Tim Schoonbeek1, Pierluigi Frisco2, Hans Onvlee2, Peter H.N. de With1, and Fons van der Sommen1; 1Eindhoven University of Technology and 2ASML (the Netherlands)

Abstract: Logs describing the execution of procedural steps during manufacturing and maintenance tasks are important for quality control and configuration management. Such logs are currently hand-written or typed during a procedure, which requires engineers to frequently step away from their work and makes the logs difficult to search and optimize. In this paper, we propose to automatically generate standardized, searchable logs by visually perceiving and monitoring the progress of the procedure in real time and comparing it to the expected procedure. Unlike related work, our approach does not restrict engineers to rigid, strictly sequential procedures, and instead allows them to execute the steps in a variety of different orders where possible. The proposed framework is experimentally validated on the task of (dis)assembling a Duplo block model and operates properly when occlusions are absent.

12:20 – 13:50
LUNCH BREAK
Colour
13:50 – 15:00 London
Session Chairs: Özgur Yöntem, University of Cambridge (UK) and Hao Xie, Rochester Institute of Technology (US)
13:50
Focal Talk: Point and Line to Surface: The Geometric Elements of Display Color Modeling, Hao Xie, Rochester Institute of Technology (US)

Abstract: Color appearance is multidimensional, and color space has been a useful geometric representation for display modeling and optimization. However, the three fundamental attributes of color, i.e., brightness, saturation, and hue, have not been given individually corresponding physical correlates: changes along one physical dimension interfere with the other color attributes, a deficiency of existing color spaces that is particularly prevalent for high-dynamic-range and wide-color-gamut displays. This paper describes how we set out to develop independent color scales for each attribute. Based on both psychophysical experiments and computational modeling, the surfaces/lines of equal brightness/saturation, as well as the boundaries between surface and illumination color modes, have been characterized. Furthermore, the independent relations between those new scales have been quantitatively evaluated. These results promise a new color representation that is more intuitive and efficient for color control in displays.

14:20
Comparison of Regression Methods and Neural Networks for Colour Correction, Abdullah Kucuk1, Graham Finlayson1, Rafal Mantiuk2, and Maliha Ashraf3; 1University of East Anglia, 2University of Cambridge, and 3University of Liverpool (UK)

Abstract: Colour correction is the problem of mapping the sensor responses measured by a camera to the display-encoded RGBs or to a standard colour space such as CIE XYZ. In regression-based colour correction, camera RAW RGBs are mapped according to a simple formula (e.g. a linear mapping). Regression methods include least squares, polynomial and root-polynomial approaches. More recently, researchers have begun to investigate how neural networks can be used to solve the colour correction problem.

In this paper, we investigate the relative performance of regression versus a neural network approach. While we find that the latter approach performs better than simple least-squares regression, its performance is not as good as that delivered by either root-polynomial or polynomial regression. The root-polynomial approach has the advantage that it is also exposure invariant. In contrast, the neural network approach delivers poor colour correction when the exposure changes.
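
The exposure invariance of the degree-2 root-polynomial expansion can be seen in a few lines: every term scales linearly with an exposure change k, so a linear map fitted on the expanded features is unaffected. A minimal, illustrative sketch:

```python
import numpy as np

def root_poly_features(rgb):
    """Degree-2 root-polynomial expansion of camera responses, shape (N, 3)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b,
                     np.sqrt(r * g), np.sqrt(r * b), np.sqrt(g * b)], axis=1)

rgb = np.random.rand(5, 3)   # hypothetical RAW responses
k = 2.0                      # an exposure change
# Every feature scales by k, e.g. sqrt(k*r * k*g) = k * sqrt(r*g):
assert np.allclose(root_poly_features(k * rgb), k * root_poly_features(rgb))
```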

14:40
Colour Difference Formula for Photopic and Mesopic Vision Incorporating Cone and Rod Responses, Maliha Ashraf1, Rafal Mantiuk2, Graham Finlayson3, Abdullah Kucuk3, and Sophie Wuerger1; 1University of Liverpool, 2University of Cambridge, and 3University of East Anglia (UK)

Abstract: The standard colour difference formulas, such as CIEDE2000, operate on colours defined by cone-fundamentals, which ignore the influence of rods on colour perception. In this work, we combine the rod intrusion model by Cao et al. with the popular CIEDE2000 colour difference formula and validate the accuracy of the new formula on three contrast sensitivity datasets. When compared with the standard CIEDE2000 formula, the new colour difference formula improves the perceptual uniformity of the space at low luminance levels.

15:00 – 15:30
BREAK
Closing Keynote
15:30 – 16:40 London
Session Chair: Özgur Yöntem, University of Cambridge (UK)
15:30
The Display of Perception and the Perception of Displays, Robert Pepperell, Fovotec Ltd/Cardiff Metropolitan University (UK)

Abstract: In this talk I consider the problem of how to display visual space naturalistically in image media. We can think of visual space—the 3D space we experience—as a kind of internal display in the head that shows us the world outside. Artists and technologists have long been interested in how to emulate visual space on external displays such as paintings, photographs, and electronic screens in a way that looks as natural as possible. A long-standing solution is linear perspective projection, which is currently used in imaging technologies from cameras to 3D renderers. Linear perspective has many advantages but also some significant limitations. Over the centuries alternative techniques have been developed for creating more naturalistic image media. I discuss the problem of how best to emulate the internal display on an external display, some of the historical solutions to the problem, and introduce a new solution. This is a form of nonlinear 3D rendering modelled on the structure of human visual space. I conclude that nonlinear human-centred approaches to 3D imaging can create more naturalistic image media than methods based on techniques such as linear perspective.

16:30
Awards Announced; Closing Remarks
Saturday 9 July 2022
Post-LIM 2022 Networking Event + demos in Cambridge
10:00 – 13:00 CAMBRIDGE (arrive between 10:00 and 11:00)
Visit Cambridge after LIM 2022. Network and see demos of some of the HDR and 3D display prototypes + other work done in the Graphics and Displays group headed by Prof. Rafal Mantiuk. Among other things, see the 3D HDR hyper-realistic display. Cambridge is a 50-minute train ride from London King's Cross station + a bus/Uber/taxi to the event venue. Transportation to Cambridge is on your own.

This event is free for anyone to attend. DETAILS and REGISTRATION
