

IMPORTANT DATES

Author Deadlines
Call for Paper Submissions
» Journal-first (JIST or JPI) 15 March
» Conference 25 March
» Late Breaking Posters 10 June
Acceptance Notification
» Journal-first (JIST or JPI) 10 April
» Conference 3 May
» Late Breaking Posters 14 June
Final Manuscripts Due
» Journal-first (JIST or JPI) 28 May
» Conference 17 May

Program Deadlines
Registration Opens mid-May
Registration Ends 19 June
Summer School 26 June
Technical Sessions 27-28 June

 

LIM 2024 Committee

General Chairs
Michael S. Brown, York University (Canada)
Chaker Larabi, University of Poitiers (France)
Steering Committee
Marina Bloj, University of Bradford (UK)
Susan Farnand, Rochester Institute of Technology (US)
Graham Finlayson, University of East Anglia (UK)
Martin Gouch, FFEI (UK)
Susanne Klein, University of the West of England (UK)
Lionel Simonot, University of Poitiers (France)
Rafal Mantiuk, University of Cambridge (UK)
Sophie Triantaphillidou, NTNU (Norway)
Javier Vázquez Corral, Universitat Autònoma de Barcelona (Spain)

LIM 2024 Program

Join us in London for a full day of image capture courses—LIM 2024 Summer School—followed by two exciting days of technical talks and networking opportunities.

AT-A-GLANCE

26 June: LIM 2024 Summer School
27-28 June: LIM Technical Program

LIM Technical Program

Thursday 27 June 2024
REGISTRATION OPEN / WELCOME COFFEE
9:00 – 9:30

Welcome and OPENING KEYNOTE
9:30 – 10:30
Session Chairs: General Co-chairs Michael Brown, York University, and Chaker Larabi, University of Poitiers
Do We Really Need More than 3 Channels in our Camera?, Jon Y. Hardeberg, professor of colour imaging, The Norwegian University of Science and Technology (NTNU), and CEO, Spektralion AS (Norway)

Abstract: Spectral imaging has been an active field of research and development for several decades. An often recurring question is: how many channels do we need? There are many reasons to ask this question, including the aim of developing faster and cheaper spectral imaging devices. The most adequate answer is usually: it depends... Recently, many researchers have achieved promising results in estimating spectral data from conventional RGB image sensors, using, for instance, deep learning models. On the other hand, we also see an increasing number of spectral imaging solutions on the market, including those based on spectral filter arrays. This makes it relevant to ask this question again—and specifically whether RGB is actually enough.

In this talk, after an overall introduction to the spectral imaging concept, we present and discuss application areas where spectral imaging may be particularly useful. We then present a spectral image processing and analysis pipeline from scene to pixel to end user, and give examples of recent research in core topics such as spectral filter array demosaicking, illuminant estimation, and scene recognition. Finally, we discuss the possibilities and limitations of estimating or reconstructing spectral curves from RGB image data, with the aim of providing an answer to our big question.
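As a concrete backdrop to the closing question, the classic linear baseline for spectral estimation maps RGB to spectra with a single regression matrix learned by least squares. A minimal sketch, with synthetic data standing in for measured spectra and camera sensitivities (all values and names are illustrative assumptions, not the speaker's method):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training set: N surfaces with known 31-band reflectance
    # spectra (400-700 nm) and the corresponding camera RGB responses.
    N, BANDS = 500, 31
    train_spectra = rng.random((N, BANDS))         # N x 31 reflectances
    sensitivities = rng.random((BANDS, 3))         # RGB spectral sensitivities
    train_rgb = train_spectra @ sensitivities      # N x 3 camera responses

    # Least-squares regression matrix (3 -> 31) via the pseudo-inverse.
    M = np.linalg.pinv(train_rgb) @ train_spectra  # 3 x 31

    # Estimate a spectrum for a new RGB triplet.
    rgb = np.array([[0.4, 0.3, 0.2]])
    estimated_spectrum = rgb @ M                   # 1 x 31

Deep-learning approaches replace the single matrix M with a learned non-linear mapping; the fundamental 3-to-31 ambiguity discussed in the talk remains either way.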

10:30 – 11:00
COFFEE BREAK
FOCAL TALK I
11:00 – 11:30
Session Chairs: Michael Brown, York University, and Chaker Larabi, University of Poitiers
Advances in Image Aesthetics Assessment: Concepts, Methods, and Applications, Luigi Celona and Simone Bianco, University of Milano-Bicocca (Italy)

Abstract: Image Aesthetic Assessment (IAA) has attracted increasing attention recently, but it remains challenging due to its high abstraction and complexity. In this talk, recent advancements in IAA are explored, emphasizing the goal, complexity, and critical role this task plays in improving visual content. Insights from our recent studies are combined to present a unified perspective on the state of IAA, focusing on methods relying on genetic algorithms, language-based understanding, and composition-attribute guidance. These methods are examined for their potential in practical applications such as content selection and quality enhancement, for example autocropping. The talk concludes with an overview of the challenges and future directions in this field.

Camera imaging performance & image quality
11:30 – 12:30
Session Chair: Luigi Celona, University of Milano-Bicocca (Italy)
11:30
Evaluation of Bright and Dark Details in HDR Scenes: A Multitask CNN Approach, Gabriel Pacianotto, Daniela Carfora, Franck Xu, Sira Ferradans, and Benoit Pochon, DXOMARK (France)

Abstract: High dynamic range (HDR) scenes are known to be challenging for most cameras. The most common artifacts associated with bad HDR scene rendition are clipped bright areas and noisy dark regions, rendering the images unnatural and unpleasing. This paper introduces a novel methodology for automating the perceptual evaluation of detail rendition in these extreme regions of the histogram for images that portray natural scenes. The key contributions include 1) the construction of a robust database of Just Objectionable Distance (JOD) scores, incorporating annotator outlier detection, and 2) the introduction of a Multitask Convolutional Neural Network (CNN) model that effectively addresses the diverse context and region-of-interest challenges inherent in natural scenes. Our experimental evaluation demonstrates that our approach aligns strongly with human evaluations. The adaptability of our model positions it as a valuable tool for ensuring consistent camera performance evaluation, contributing to the continuous evolution of smartphone technologies.

11:45
The Influence of Read Noise on Automatic License Plate Recognition System, Nikola Plavac, Seyed Ali Amirshahi, Marius Pedersen, and Sophie Triantaphillidou, Norwegian University of Science and Technology (Norway)

Abstract: This study aims to investigate how a specific type of distortion in imaging pipelines, such as read noise, affects the performance of an automatic license plate recognition algorithm. We first evaluated a pretrained three-stage license plate recognition algorithm using undistorted license plate images. Subsequently, we applied 15 different levels of read noise using a well-known imaging pipeline simulation tool and assessed the recognition performance on the distorted images. Our analysis reveals that recognition accuracy decreases as read noise becomes more prevalent in the imaging pipeline. However, we observed that, contrary to expectations, a small amount of noise can increase vehicle detection accuracy, particularly in the case of the YOLO-based vehicle detection module. The part of the automatic license plate recognition system that is mostly prone to errors and is mostly affected by read noise is the optical character recognition module. The results highlight the importance of considering imaging pipeline distortions when designing and deploying automatic license plate recognition systems.
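To make the studied distortion concrete, read noise can be simulated as additive Gaussian noise, in electrons, applied after photon capture. A toy sketch of a simplified pipeline (not the authors' simulation tool; full-well capacity and noise levels are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    def capture(scene_photons, read_noise_e, full_well=1000, bits=8):
        """Simulate one exposure with photon shot noise and read noise."""
        electrons = rng.poisson(scene_photons).astype(float)              # shot noise
        electrons += rng.normal(0.0, read_noise_e, scene_photons.shape)   # read noise
        dn = np.clip(electrons / full_well, 0, 1) * (2**bits - 1)
        return np.round(dn).astype(np.uint8)

    scene = np.full((64, 64), 200.0)   # flat patch, 200 mean photons per pixel
    for sigma in (2, 20, 100):         # increasing read-noise levels
        img = capture(scene, read_noise_e=sigma)
        print(f"read noise {sigma} e-: output std = {img.std():.2f} DN")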

12:00
Unifying Path and Center-surround Retinex Algorithms, Afsaneh Karami and Graham Finlayson, University of East Anglia (UK)

Abstract: Retinex is a theory of colour vision, and it is also a well-known image enhancement algorithm. The Retinex algorithms reported in the literature are often called path-based or centre-surround. In the path-based approach, an image is processed by calculating (reintegrating along) paths in proximate image regions and averaging amongst the paths. Centre-surround algorithms convolve an image (in log units) with a large-scale centre-surround-type operator. Both types of Retinex algorithms map a high dynamic range image to a lower-range counterpart suitable for display, and both are proposed as a method to simultaneously enhance an image for preference.

In this paper, we reformulate one of the most common variants of the path-based approach and show that it can be recast as a centre-surround algorithm at multiple scales. Significantly, our new method processes images more quickly and is potentially biologically plausible. To the extent that Retinex produces pleasing images, our method produces equivalent outputs. Experiments validate our method.
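For orientation, a multi-scale centre/surround computation in the log domain fits in a few lines. A minimal sketch in the spirit of the recasting described above, assuming Gaussian surrounds and equal weights per scale (the paper's actual derivation and parameters differ):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_centre_surround(image, scales=(5, 25, 60), eps=1e-6):
        """Average, over scales, of log(image) minus log(blurred surround)."""
        log_img = np.log(image + eps)
        out = np.zeros_like(log_img)
        for sigma in scales:
            surround = gaussian_filter(image, sigma)   # large-scale surround
            out += log_img - np.log(surround + eps)
        return out / len(scales)

    hdr = np.random.rand(128, 128) ** 2.2      # stand-in high dynamic range input
    lightness = multiscale_centre_surround(hdr)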

12:15
Colourlab Image Database: Optical Aberrations, Raed Hlayhel, Mobina Mobini, Bidossessi Emmanuel Agossou, Marius Pedersen, and Seyed Ali Amirshahi, Norwegian University of Science and Technology (Norway)

Abstract: For image quality assessment, the availability of diverse databases is vital for the development and evaluation of image quality metrics. Existing databases have played an important role in promoting the understanding of various types of distortion and in the evaluation of image quality metrics. However, a comprehensive representation of optical aberrations and their impact on image quality is lacking. This paper addresses this gap by introducing a novel image quality database that focuses on optical aberrations. We conduct a subjective experiment to capture human perceptual responses on a set of images with optical aberrations. We then test the performance of selected objective image quality metrics to assess these aberrations. This approach not only ensures the relevance of our database to real-world scenarios but also contributes to ensuring the performance of the selected image quality metrics. The database is available for download at https://www.ntnu.edu/colourlab/software.

12:30 – 13:30
LUNCH BREAK
AFTERNOON KEYNOTE
13:30 – 14:30
Session Chairs: Michael Brown, York University, and Chaker Larabi, University of Poitiers
Probing Life's Secrets - How Quantum Sensing is Revolutionising Biological Imaging, Melissa Mather, professor, Quantum Sensing and Engineering, University of Nottingham (UK)

Abstract: Biological imaging has long strived for a balance: achieving exquisite detail while preserving the delicate nature of living systems. Traditional techniques often fall short, introducing disruptions that hinder our understanding of healthy and diseased states. This presentation will start with a historical introduction to biological imaging and move to explore the exciting field of quantum sensing, a field poised to transform biological imaging. By harnessing the power of quantum mechanics, we can achieve unprecedented sensitivity, allowing us to unravel the mysteries of cellular processes at a fundamental level. This talk aims to ignite your curiosity and offer a glimpse into a future where quantum sensing revolutionises biological research. Through a convergence of aligned technologies, there is tremendous potential to truly probe life's secrets with unprecedented clarity and minimal disruption.

14:30 – 15:00
COFFEE BREAK
FOCAL II
15:00 – 15:30
Session Chairs: Michael Brown, York University, and Chaker Larabi, University of Poitiers
Additive vs. Multiplicative Luminosity-color Decomposition, David Alleysson, Université Grenoble-Alpes (France)

Abstract: The visual system decomposes light entering the eye into an achromatic and a chromatic signal. Whether this decomposition is additive or multiplicative is still an open research question. Luminance has been found to be additive when measured physiologically but multiplicative in appearance. Demosaicing multispectral images (shot through a color or spectral filter array) shows how additive decomposition is a linear solution to the inverse problem of mosaicing, whereas for reflectance estimation a multiplicative decomposition would be preferred. The two decompositions imply two different geometries that share their vector spaces but not their metrics.

Imaging systems
15:30 – 16:15
Session Chair: David Alleysson, Université Grenoble-Alpes (France)
15:30
Auto-MAT: Image Denoising via Automatic In-painting, Abdullah Hayajneh and Erchin Serpedin, Texas A&M University (US); and Mitchel Stotland, Sidra Medicine (Qatar)

Abstract: This paper introduces an innovative blind in-painting technique designed for image quality enhancement and noise removal. Employing Monte-Carlo simulations, the proposed method approximates the optimal mask necessary for automatic image in-painting. This involves the progressive construction of a noise removal mask, initially sampled randomly from a binomial distribution. A confidence map is iteratively generated, providing a pixel-wise indicator map that discerns whether a particular pixel resides within the dataset domain. Notably, the proposed method eliminates the manual creation of an image mask to eradicate noise, a process prone to additional time overhead, especially when noise is dispersed across the entire image. Furthermore, the proposed method simplifies the determination of pixels involved in the in-painting process, excluding normal pixels and thereby preserving the integrity of the original image content. Computer simulations demonstrate the efficacy of this method in removing various types of noise, including brush painting and random salt and pepper noise. The proposed technique successfully restores similarity between the original and normalized datasets, yielding a Binary Cross Entropy (BCE) of 0.69 and a Peak-Signal-to-Noise-Ratio (PSNR) of 20.069. With its versatile applications, this method proves beneficial in diverse industry and medical contexts.

15:45
A Novel Multimodal 3D Depth Sensing Device, Jian Ma, Shenge Wang, Matthieu Dupre, Ioannis Nousias, and Sergio Goma, Qualcomm Technologies, Inc. (US)

Abstract: We introduce an innovative 3D depth sensing scheme that seamlessly integrates various depth sensing modalities and technologies into a single compact device. Our approach dynamically switches between depth sensing modes, including iToF and structured light, enabling real-time data fusion of depth images. We successfully demonstrated iToF depth imaging without multipath interference (MPI), simultaneously achieving high image resolution (VGA) and high depth accuracy at a frame rate of 30 fps.

16:00
A Multimode Quantum Optics Approach to Incoherent Imaging, Giacomo Sorelli, Fraunhofer IOSB (Germany)

Abstract: Recent works employing tools from quantum optics and quantum metrology proposed a new passive imaging technique that can resolve details far below the diffraction limit. This technique replaces standard spatially-resolved intensity measurements, e.g. at each pixel of a camera, with spatial-mode demultiplexing (SpaDe) measurements that acquire information more efficiently. In this contribution, we provide an intuitive explanation of why the SpaDe approach is so effective, and illustrate how we used these ideas to discriminate one point source from two, and to estimate the separation between two incoherent sources.

POSTER PREVIEWS
16:15 – 16:35
Session Chairs: Michael Brown, York University, and Chaker Larabi, University of Poitiers
A True Panoramic Camera for Smartphone Applications, Jian Ma1, Shenge Wang1, Yunwen Li2, Adrian Giura1, Chris Miclea1, and Sergio Goma1; 1Qualcomm Technologies, Inc. (US) and 2Qualcomm Semiconductor Limited (Taiwan)

Abstract: We introduce our cutting-edge panoramic camera – a true panoramic camera (TPC), designed for mobile smartphone applications. Leveraging prism optics and well-known image processing algorithms, our camera achieves parallax-free seamless stitching of images captured by dual cameras pointing in two different directions opposite to the normal. The result is an ultra-wide (140°x53°) panoramic field-of-view (FOV) without the optical distortions typically associated with ultra-wide-angle lenses.

Packed into a compact camera module measuring 22 mm (length) x 11 mm (width) x 9 mm (height) and integrated into a mobile testing platform featuring the Qualcomm Snapdragon® 8 Gen 1 processor, the TPC demonstrates unprecedented capabilities of capturing panoramic pictures in a single shot and recording panoramic videos.

The Importance of Object-to-Background Distance when Evaluating Perceived Transmittance, Rafique Ahmed and Davit Gigilashvili, Norwegian University of Science and Technology (Norway)

Abstract: Transparent and translucent objects transmit part of the incident radiant flux, permitting a viewer to see the background through them. Perceived transmittance and how the human visual system assigns transmittance to flat filters have been topics of scholarly interest. However, these works have usually been limited to the role of the filter's optical properties. Readers may have noticed in their daily lives that objects close behind a frosted glass are discernible, but other objects even slightly further behind are virtually invisible. The reason for this lies in geometrical optics and has been mostly overlooked or taken for granted from the perceptual perspective. In this work, we investigated whether the distance between a translucent filter and a background affects the perceived transmittance of the filter, or whether observers account for this distance and assign transmittance to the filters in a consistent manner. Furthermore, we explored whether the trend holds for a broad range of materials. For this purpose, we created an image dataset in which a broad range of real physical flat filters were photographed at different distances from the background. Afterward, we conducted a psychophysical experiment to explore the link between the object-to-background distance and perceived transmittance. We found that the results vary and depend on the filter's optical properties. While transmittance was judged consistently for some filters, for others it was strongly underestimated when the background moved further away.

Gamma Maps: Non-linear Gain Maps for HDR Reconstruction, Trevor D. Canham1, SaiKiran K. Tedla1, Michael J. Murdoch2, and Michael S. Brown1; 1York University (Canada) and 2Rochester Institute of Technology (US)

Abstract: To accommodate displays with varying dynamic ranges, image encoding frameworks are emerging that propose to include metadata within a standard dynamic range (SDR) image to encode an arbitrary, user-defined residual which allows the SDR image’s pixel values to be transformed into its intended high dynamic range (HDR) version. The suggested metadata is a compressed version of the gain map computed as the pixel-wise ratio between the HDR and SDR image. Multiplying the gain map with the SDR image reconstructs the HDR image. This paper proposes an effective alternative for HDR recovery in the form of a pixel-wise exponent map instead of the multiplicative gain map. We demonstrate experimentally that the exponent map approach produces higher quality HDR reconstructions over the gain map strategy according to several metrics.
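The two reconstruction rules under comparison fit in a few lines. A minimal sketch, assuming normalised images and ignoring the metadata compression step (so both reconstructions are exact by construction):

    import numpy as np

    rng = np.random.default_rng(0)
    sdr = np.clip(rng.random((4, 4)), 0.01, 0.99)   # SDR image, normalised
    hdr = sdr * rng.uniform(1.0, 8.0, sdr.shape)    # intended HDR version

    gain_map = hdr / sdr                            # metadata: pixel-wise ratio
    gamma_map = np.log(hdr) / np.log(sdr)           # metadata: pixel-wise exponent

    hdr_from_gain = sdr * gain_map                  # multiplicative recovery
    hdr_from_gamma = sdr ** gamma_map               # exponent-map recovery
    assert np.allclose(hdr_from_gain, hdr) and np.allclose(hdr_from_gamma, hdr)

In practice the map is quantised and compressed as metadata, which is where the two representations can differ in reconstruction quality.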

Investigation of the Performance of Pixel-domain 2D-JND Models for 360-degree Imaging, Rivo T. Andriamanalina1, 2, Mohamed-Chaker Larabi1, and Steven Le Moan2; 1University of Poitiers (France) and 2Norwegian University of Science and Technology (Norway)

Abstract: Spatial just noticeable difference (JND) refers to the smallest amplitude of variation that can be reliably detected by the Human Visual System (HVS). Several studies have defined models based on thresholds obtained under controlled experiments for conventional 2D or 3D imaging. While the concept of JND is well understood for these types of content, it is legitimate to question the validity of the results for Extended Reality (XR), where the observation conditions are significantly different. In this paper, we investigate the performance of well-known 2D-JND models on 360-degree images. These models are integrated into basic quality assessment metrics to study their ability to improve the quality prediction process with regard to human judgment. Here, the metrics serve as tools to assess the effectiveness of the JND models. In addition, to mimic 360-degree viewing conditions, the equator bias is used to balance the JND thresholds. Overall, the obtained results suggest that 2D-JND models are not well adapted to extended reality conditions and require in-depth improvement or redefinition to be applicable. The slight improvement obtained using the equator bias demonstrates the potential of taking XR characteristics into account and opens the floor for further investigations.
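As an illustration of the equator-bias idea, per-pixel JND thresholds of an equirectangular image can be weighted by latitude so that the frequently viewed equatorial band is treated as more sensitive. A hedged sketch; the cosine-shaped weight and its strength are illustrative assumptions, not the paper's formulation:

    import numpy as np

    def equator_biased_jnd(jnd, strength=0.5):
        """jnd: H x W map of 2D-JND thresholds for an equirectangular image."""
        h, _ = jnd.shape
        latitude = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2   # -pi/2..pi/2
        weight = 1.0 - strength * np.cos(latitude)   # smallest at the equator
        return jnd * weight[:, None]

    jnd_map = np.full((90, 180), 4.0)        # uniform toy threshold map
    biased = equator_biased_jnd(jnd_map)     # tighter thresholds near the equator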

A Pipeline for Characterising Virtual Reality Head Mounted Displays, Ujjayanta Bhaumik, Laurens Van de Perre, and Frédéric B. Leloup, KU Leuven (Belgium)

Abstract: With its increasing popularity across various scientific research domains, virtual reality serves as a powerful tool for conducting colour science experiments due to its capability to present naturalistic scenes under controlled conditions. In this paper, a systematic approach for characterising the colorimetric profile of a head mounted display is proposed. First, a commercially available head mounted display, the Meta Quest 2, was characterised with the aid of a colorimetric luminance camera. Afterwards, the suitability of four different models (Look-up Table, Polynomial Regression, Artificial Neural Network, and Gain Gamma Offset) to predict the colorimetric features of the head mounted display was investigated.

Automated Point Cloud Filtration Through Minimization of Point Cloud Metrics, Michael Holm and Eliot Winer, Iowa State University (US)

Abstract: Point clouds generated from 3D scans of part surfaces consist of discrete points, some of which may be outliers. Filtering techniques to remove these outliers from point clouds frequently require a “guess and check” method to determine proper filter parameters. This paper presents two novel approaches to automatically determine proper filter parameters using the relationships between point cloud outlier removal, principal component variance, and the average nearest neighbor distance. Two post-processing workflows were developed that reduce outlier frequency in point clouds using these relationships. These post-processing workflows were applied to point clouds with artificially generated noise and outliers, as well as a real-world point cloud. Analysis of the results showed the approaches effectively reduced outlier frequency while preserving the ground truth surface, without requiring user input.
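For a concrete picture of the quantities the automation reasons about, a minimal statistical outlier-removal sketch built on the average nearest-neighbour distance; the fixed threshold below is the kind of "guess and check" parameter the paper determines automatically:

    import numpy as np
    from scipy.spatial import cKDTree

    def remove_outliers(points, k=8, std_ratio=2.0):
        """Drop points whose mean k-NN distance exceeds mean + std_ratio * std."""
        tree = cKDTree(points)
        dists, _ = tree.query(points, k=k + 1)   # neighbour 0 is the point itself
        mean_knn = dists[:, 1:].mean(axis=1)
        keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
        return points[keep]

    rng = np.random.default_rng(0)
    surface = rng.normal(0.0, 0.01, (1000, 3))   # dense scanned "surface"
    outliers = rng.uniform(-1.0, 1.0, (30, 3))   # sparse stray points
    cleaned = remove_outliers(np.vstack([surface, outliers]))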

Investigation on Low-light Image Enhancement based on Multispectral Reconstruction, Jinxing Liang1,4, Zhuan Zuo1, Lei Xin2, Xiangcai Ma3, Hang Luo1, Xinrong Hu1, and Kaida Xiao4; 1Wuhan Textile University (China), 2Wuchang University of Technology (China), 3Shanghai Publishing and Printing College (China), and 4University of Leeds (UK)

Abstract: Low-light image enhancement is a hot topic, as low-light images cannot accurately reflect the content of objects. Low-light image enhancement technology can effectively restore color and texture information. Unlike traditional low-light image enhancement methods that map directly from low-light to normal-light, a method of low-light image enhancement based on multispectral reconstruction is proposed. The key point of the proposed method is that the low-light image is first transformed into the spectral reflectance space using a deep learning model that learns the end-to-end mapping from a low-light image to a normal-light multispectral image. The corresponding normal-light color image is then rendered from the reconstructed multispectral image, completing the enhancement. The motivation behind the proposed method is to determine whether routing low-light image enhancement through multispectral reconstruction improves enhancement performance. Verification on the commonly used LOL dataset showed that the proposed method outperforms traditional direct enhancement methods; however, the underlying mechanism of the method remains to be studied further.

SiCAM: Spectral Image Color Appearance Model, Aqsa Hassan1,2, Giorgio Trumpy2, Susan Farnand1, and Mekides Assefa Abebe1; 1Rochester Institute of Technology (US) and 2Norwegian University of Science and Technology (Norway)

Abstract: In the past, several research studies have highlighted the idea that spectral data produces better tone-accurate images. Inspired by these studies, this paper introduces the spectral image color appearance model titled SiCAM, designed for tone mapping an HDR hyperspectral radiance cube to a three-channel LDR image. It is to be noted that SiCAM is inspired by the iCAM06 image color appearance model, where we adapted the iCAM06 for hyperspectral input by embedding a spectral adaptation transformation, extending the existing chromatic adaptation transform (CAT) method. Additionally, we conducted a psychophysical experiment to evaluate the proposed model and the effectiveness of having spectral data instead of traditional three-channel input, for HDR image rendering. The proposed model is also assessed in comparison to the performance of iCAM06 and the gamma tone mapping approaches. The subjective evaluation indicates that SiCAM either outperformed these methods in terms of both accurate color appearance and pleasantness or at least generated comparable results. This also hints that the spectral information might be able to improve not only the acquisition capabilities but also display rendering. Due to the lack of publicly available HDR spectral datasets, we captured the HDR hyperspectral radiance images of four different HDR scenes which will be made available along with the related source code.

POSTER SESSION WITH DRINKS
16:35 – 18:00

Join colleagues to discuss the poster papers with their authors.

 

Friday 28 June 2024
FOCAL TALK III
9:00 – 9:30
Session Chairs: Michael Brown, York University, and Chaker Larabi, University of Poitiers
Assessment of HDR-formats: Challenges of Perceptual Evaluation and Objective Measurements of Camera Captured Contents, Benoit Pochon, DXOMARK Image Labs (France)

Abstract: Recent cameras, especially smartphones, provide HDR formats for capturing videos and photos. For end-users, these formats hold great potential to enhance the visualization experience of captured content on supported displays. Consequently, there is a need to rigorously and objectively evaluate the content produced in HDR formats. In this article, we will address the current challenges in perceptual evaluation and objective measurement of camera footage in HDR formats, taking a practical perspective. Based on the results of a perceptual experiment conducted with HDR video formats, we will underline the importance of viewing conditions and signal levels, and list open questions about evaluating HDR still images. In the second part, we will provide an overview of objective measurements for HDR formats using ICtCp.

HDR and Multispectral imaging
9:30 – 10:30
Session Chair: Benoit Pochon, DXOMARK Image Labs (France)
9:30
SDR Image Reconstruction for the Improvement of Nighttime Traffic Classification Using a New HDR Traffic Dataset, Mark Benyamin, Ulrich Schwanecke, Mike Christmann, and Rolf Hedtke, RheinMain University of Applied Sciences (Germany)

Abstract: In order to improve traffic conditions and reduce carbon emissions in urban areas, smart mobility and smart cities are becoming increasingly important measures. To enable the widespread use of the cameras required for this, cost and size requirements necessitate the use of low-cost standard dynamic range (SDR) cameras. However, these cameras do not provide sufficient image quality for a reliable classification of road users, especially at night.
In this paper, we present a data-driven approach to optimise image quality and improve classification accuracy of a given vehicle classifier at night. Our approach uses a combination of image inpainting and high dynamic range (HDR) image reconstruction to reconstruct and optimise critical image areas. Therefore, we introduce a large HDR traffic dataset with time-synchronised SDR images. We also present an approach to automatically degrade the HDR traffic data to generate relevant and challenging training pairs. We show that our approach significantly improves the classification of road users at night without having to retrain the underlying vehicle classifier. Supplementary information as well as the dataset are published at https://www.mt.hs-rm.de/nighttime-traffic-reconstruction/.

9:45
A Neural Approach for Skin Spectral Reconstruction, Fereshteh Mirjalili1 and Giuseppe Claudio Guarnera1,2; 1Norwegian University of Science and Technology (Norway) and 2University of York (UK)

Abstract: In a computer-generated holographic projection system, the image is reconstructed via the diffraction of light from a spatial light modulator. In this process, several factors could contribute to non-linearities between the reconstruction and the target image. This paper evaluates the non-linearity of the overall holographic projection system experimentally, using binary phase holograms computed using the one-step phase retrieval (OSPR) algorithm, and then applies a digital pre-distortion (DPD) method to correct for the non-linearity. Both a notable increase in reconstruction quality and a significant reduction in mean squared error were observed, proving the effectiveness of the proposed DPD-OSPR algorithm.

10:00
Design of a Snapshot Hyperspectral Gonioradiometer for Appearance Characterization, Nathan Slembrouck, Jan Audenaert, and Frédéric B. Leloup, KU Leuven (Belgium)

Abstract: Gonioradiometry plays a fundamental role in understanding the scattering properties of materials. As light interacts with surfaces, its scattering behaviour varies across different incident angles, wavelengths, and surface characteristics. Gonioradiometric measurements offer a systematic approach to quantify these intricate scattering patterns, by means of the Bidirectional Scattering Distribution function. In this paper, a new approach is presented to quantify this Bidirectional Scattering Distribution Function, for which an existing measurement instrument has been enhanced by incorporation of a hyperspectral imaging device. The hyperspectral imaging system enables detailed spectral reflectance data collection for each pixel, paving the way for measuring samples where the reflectance properties vary along the surface. Challenges such as zooming and dynamic range constraints are addressed, with the paper detailing the design and evaluation of the system. The hyperspectral gonioradiometer offers promising avenues for future research and applications in visual appearance metrology and material characterisation.

10:15
Is Multispectral Enough? An Evaluation on the Performance of Multispectral Images in Pigment Unmixing Task, Mitra Amiri and Giorgio Trumpy, Norwegian University of Science and Technology (Norway)

Abstract: Multispectral imaging, in contrast with hyperspectral imaging, is a cheaper and more accessible method with a feasibly mobile setup. However, the restrained spectral resolution of multispectral images is a limitation that influences the applicability of the method in different fields. In this study, we try to answer the question of whether multispectral images are suitable for the spectral unmixing task. For this specific application, we explore spectral unmixing of an oil painting to obtain pigment maps. We observe that the performance of the multispectral imaging system in the pigment unmixing task is significantly influenced by two key factors: the number of bands in the multispectral imaging system and the spectral range covered by these bands in relation to the spectral features of the pigments present in the spectral library.
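The unmixing step itself is commonly posed as non-negative least squares against a spectral library. A minimal per-pixel sketch, assuming linear mixing (real pigment mixtures are non-linear, e.g. Kubelka-Munk, which this toy ignores) and an illustrative 9-band system:

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    BANDS, PIGMENTS = 9, 4                    # e.g. a 9-band multispectral camera
    library = rng.random((BANDS, PIGMENTS))   # reference pigment spectra (columns)

    true_abundance = np.array([0.6, 0.0, 0.3, 0.1])
    pixel = library @ true_abundance          # observed pixel spectrum

    abundance, residual = nnls(library, pixel)   # non-negative concentrations
    print(np.round(abundance, 3), residual)

With fewer bands than distinct spectral features the system becomes ill-conditioned, which is the band-count sensitivity the study reports.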

10:30 – 11:00
COFFEE BREAK
LATE BREAKING POSTERS
11:00 – 12:30

  • Investigating visual and tactile perceptions of garments for virtual environments, Molly Talbot, Kaida Xiao, and Ningtao Mao, University of Leeds (UK)
  • Palette-based Color Harmonization via Color Naming, Danna Xue1,2, Javier Vazquez-Corral1, Luis Herranz3, Yanning Zhang2, and Michael S. Brown4; 1Universitat Autònoma de Barcelona (Spain), 2Northwestern Polytechnical University (China), 3Universidad Autónoma de Madrid (Spain), and 4York University (Canada)
  • A comprehensive understanding of the tactile properties using fabric images, videos, and real fabrics, Qinyuan Li, Kaida Xiao, and Ningtao Mao, University of Leeds (UK)
  • Robust estimation of exposure ratios in multi-exposure stacks, Param Hanji and Rafal Mantiuk, University of Cambridge (UK)
  • 3D Avatar Generation using Diffusion Models Prior, Fei Yin and Rafal Mantiuk, University of Cambridge (UK)
  • The Extended Planckian Locus, E. Daneshvar and Graham D. Finlayson, University of East Anglia (UK)
  • HDR Image Deglaring via MTF Inversion with Enhanced Low-Frequency Characterisation, Alejandro Sztrajman, Hongyun Gao, and Rafał Mantiuk, University of Cambridge (UK)
  • User-preference towards HDR tone-mapping strategies of smartphone manufacturers, Pooshpanjan Roy Biswas, Thibault Cabana, and Adrien Carmone, DXOMARK (France)
  • Snapshot imaging for VIS-NIR hyperspectral point clouds, Kenton Kwok, Living Optics (UK)
  • The Application of the Matrix-R Post-Processing Method for Pan-Sharpening, Abdullah Kucuk and Graham D. Finlayson, University of East Anglia (UK)

12:30 – 13:30
LUNCH BREAK
FOCAL TALK IV
13:30 – 14:00
Session Chairs: Michael Brown, York University, and Chaker Larabi, University of Poitiers
Reconstruction of Colors in Underwater Scenes: Challenges and Opportunities, Derya Akkaynak, University of Haifa (Israel)

Abstract: We are now at a point where, for every consumer camera, several third-party underwater housings are available off-the-shelf. Marine scientists can collect underwater images and video faster than was ever possible before. Yet this large-scale imagery still cannot be analyzed efficiently and quickly enough to provide scientists with the insights needed to follow the fate of our declining ocean ecosystems. In this talk, I will describe the challenges that remain regarding accurate and consistent reconstruction of colors in underwater scenes, and discuss the bottlenecks preventing underwater computer vision from achieving the progress and performance that in-air computer vision has enjoyed in the last decade.

Computational imaging and image processing
14:00 – 15:00
Session Chair: Benoit Pochon, DXOMARK Image Labs (France)
14:00
Optimal Filter Shape for Convolution-based Image Lightness Processing, D. Andrew Rowlands and Graham D. Finlayson, University of East Anglia (UK)

Abstract: In the convolutional retinex approach to image lightness processing, a captured image is processed by a centre/surround filter that is designed to mitigate the effects of shading (illumination gradients), which in turn compresses the dynamic range. Recently, an optimisation approach to convolutional retinex has been introduced that outputs a convolution filter that is optimal (in the least squares sense) when the shading and albedo autocorrelation statistics are known or can be estimated. Although the method uses closed-form expressions for the autocorrelation matrices, the optimal filter has so far been calculated numerically. In this paper, we parameterise the filter, and for a simple shading model we show that the optimal filter takes the form of a cosine function. This important finding suggests that, in general, the optimal filter shape directly depends upon the functional form assumed for the shadings.
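To visualise the finding, a cosine-shaped centre/surround filter can be built and applied in the log domain in a few lines. An illustrative 1-D sketch; the filter width and normalisation are assumptions, not the paper's optimised parameters:

    import numpy as np

    def cosine_filter(width):
        """Raised-cosine surround over the window, normalised to sum to 1."""
        x = np.linspace(-np.pi / 2, np.pi / 2, width)
        f = np.cos(x)
        return f / f.sum()

    albedo = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])  # step edge
    shading = np.linspace(0.3, 1.0, 200)          # slowly varying illumination
    log_img = np.log(albedo * shading)

    surround = np.convolve(log_img, cosine_filter(101), mode="same")
    lightness = log_img - surround   # shading gradient suppressed, edge kept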

14:15
Improving Accuracy of Color Reproduction on Mobile Displays, Eric Kirchner1, Lan Njo1, Esther Perales2, Aurora Larrosa Navarro2, Carmen Vázquez2, Ivo van der Lans1, and Peter Spiers1; 1AkzoNobel Paints and Coatings (the Netherlands) and 2University of Alicante (Spain)

Abstract: When visualizing colors on websites or in apps, color calibration is not feasible for consumer smartphones and tablets. The vast majority of consumers do not have the time, equipment, or expertise to conduct color calibration. For such situations we recently developed the MDCIM (Mobile Display Characterization and Illumination Model). Using optics-based image processing, it aims at improving digital color representation as assessed by human observers. It takes into account display-specific parameters and local lighting conditions.
In previous publications we determined model parameters for four mobile displays: the OLED display in the Samsung Galaxy S4, and three LCD displays: the iPad Air 2 and the iPad models from 2017 and 2018. Here, we investigate the performance of another OLED display, in the iPhone XS Max. Using a psychophysical experiment, we show that colors generated by the MDCIM method are visually perceived as a much better color match with physical samples than those produced by the default method, which is based on sRGB space and the color management system implemented by the smartphone manufacturer. The percentage of reasonable to good color matches improves from 3.1% to 85.9% by using the MDCIM method, while the percentage of incorrect color matches drops from 83.8% to 3.6%.

14:30
A Sparkle-rendering Model Based on Metrological Parameter, Aurora Larrosa Navarro1, Julián Espinosa1, Alejandro Ferrero2, Nina Basic3, Néstor Tejedor1, and Esther Perales1; 1University of Alicante (Spain), 2Spanish National Research Council (Spain), and 3Federal Institute of Metrology METAS (Switzerland)

Abstract: E-commerce has become the primary global shopping method, but the inability to physically inspect products presents challenges for consumers. This study focuses on the sparkle texture effect, which is significant in various industries. Evaluation tools are limited to two instruments, leading the International Commission on Illumination (CIE) to work on establishing measurement scales. The study proposes a sparkle-rendering model utilising a metrological scale based on the luminous point density and the visibility probability distribution, assuming a half-Gaussian shape that is fitted to measurement data to obtain the parameters μ and σ. The model algorithm was computed for 25 samples across three different geometries (15º:0º, 45º:0º, and 75º:0º). The maximum deviation between measurements and the fitted function was found to be 0.09, indicating negligible discrepancies in terms of cumulative probability. The analysis revealed that μ tends to approach zero for all samples, while σ showed a correlation with the density of sparkle points dS, with a Pearson correlation coefficient exceeding 0.91 for all geometries, indicating a strong relationship between the two variables. A preliminary rendering is obtained using the Mobile Display Characterisation and Illumination Model (MDCIM) for the background colour.
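The half-Gaussian fit described above is a two-parameter curve fit. A hedged sketch with synthetic "measurements" standing in for the visibility data (the axis, noise level, and initial guesses are illustrative assumptions):

    import numpy as np
    from scipy.optimize import curve_fit

    def half_gaussian(x, mu, sigma):
        """Half-Gaussian visibility model, used for x >= mu in practice."""
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    x = np.linspace(0, 5, 50)   # e.g. sparkle-contrast axis
    rng = np.random.default_rng(0)
    measured = half_gaussian(x, 0.0, 1.2) + rng.normal(0, 0.01, x.size)

    (mu, sigma), _ = curve_fit(half_gaussian, x, measured, p0=(0.1, 1.0))
    print(f"mu = {mu:.3f}, sigma = {sigma:.3f}")   # mu near zero, as in the paper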

14:45
Even Simpler Tone Curves, James Bennett and Graham Finlayson, University of East Anglia (UK)

Abstract: Abstractly, a tone curve can be thought of as an increasing function of input brightness which, when applied to an image, results in a rendered output that is ready for display and is preferred. However, the shape of the tone curve is not arbitrary. Curves that are too steep or too shallow (which concomitantly result in too much or too little contrast) are not preferred. Thus, tone curve generation algorithms often constrain the shape of the tone curves they generate. Recently, it was argued that tone curves should—as well as being limited in their slopes—only have one or zero inflexion points.
In this paper, we propose that this inflexion-point requirement should be strengthened further. Indeed, the single-inflexion-point constraint still admits curves with sharp changes in slope (which are sometimes the culprits of banding artefacts in images). Thus, we develop a novel optimisation framework which additionally ensures that sharp changes in the tone curves are smoothed out (technically, mollified). Our even simpler tone curves are shown to render most real images visually similar to those rendered without the constraints. Experiments validate our method.
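The two ingredients, slope limiting and mollification, can be sketched directly on a sampled tone curve. A minimal illustration, with slope bounds and kernel width as assumptions rather than the paper's optimisation framework:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def simple_tone_curve(curve, min_slope=0.3, max_slope=3.0, smooth_sigma=4.0):
        """curve: increasing samples of T(x) on a uniform grid over [0, 1]."""
        n = len(curve)
        slope = np.diff(curve) * (n - 1)                # per-sample slopes
        slope = np.clip(slope, min_slope, max_slope)    # limit contrast
        slope = gaussian_filter1d(slope, smooth_sigma)  # mollify sharp changes
        out = np.concatenate([[curve[0]], curve[0] + np.cumsum(slope) / (n - 1)])
        return out / out[-1]                            # renormalise to [0, 1]

    x = np.linspace(0, 1, 256)
    harsh = np.minimum(2.5 * x, 0.5 + 0.5 * x)   # piecewise-linear, sharp knee
    smooth = simple_tone_curve(harsh)            # knee eased out, slopes bounded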

15:00 – 15:30
COFFEE BREAK
CLOSING KEYNOTE
15:30 – 16:30
Session Chairs: Michael Brown, York University, and Chaker Larabi, University of Poitiers
PHOTO-ÆSTHETICS: How Photographers Think, Michael Freeman, photographer (UK)

Abstract: Through smartphones and social media, photography has spread so far into people's lives that it has become an important industry and area of study. Technical excellence in optics, sensors, shutters, and settings has now largely been achieved. Lagging far behind is an understanding of how and why images work aesthetically, and how they might be improved. This talk will look at the ways in which professional and other committed photographers attempt to make images effective (through defining the subject, framing and organising the image, relating parts of the scene to each other, choosing lighting, balancing information and expression, and rendering).

Best Paper Award Presentation and Closing Remarks

 
