09:30
Invited Talk: Keeping up appearances, Susanne Klein, EPSRC Manufacturing Fellow, Centre for Fine Print Research, UWE, Bristol (UK)
Abstract: Appearance, definition from the New Shorter English Dictionary: 1. The action of coming into view or becoming visible. 2. The action of appearing formally at any proceedings. 3. The action or state of seeming or appearing to be. 4. State or form as perceived. 5. Outward show or aspect. 6. A phenomenon, an apparition. 7. A gathering of people. 8. The action or an instance of coming before the world.
From the definition, it can be understood that appearance is always a show with an audience: without a viewer, appearance does not exist. What does this mean for ‘Material Appearance’? In this lecture I would like to explore how, in portraits, as examples of an outward show coming before the world, the recording and reproduction of appearance rely on shared knowledge. Is appearance in the eye of the beholder? What shortcuts can be taken? What misinterpretations will happen when the cultural background of the audience differs from that of the people who have recorded and recreated appearance? The case studies will come from different continents and different eras.
Appearance in 3D
11:00 – 12:10 London
Session Chair: Davide Deganello, Swansea University (UK)
11:00
Focal Talk: TBD, Davide Deganello, professor, Mechanical Engineering, Swansea University (UK)
Abstract: Coming soon.
11:30
JIST-first: Digital pre-distorted one-step phase retrieval algorithm for real-time hologram generation for holographic displays, Jinze Sha, Ada Goldney, Andrew Kadis, Jana Skirnewskaja, and Timothy Wilkinson, University of Cambridge (UK)
Abstract: In a computer-generated holographic projection system, the image is reconstructed via the diffraction of light from a spatial light modulator. In this process, several factors could contribute to non-linearities between the reconstruction and the target image. This paper evaluates the non-linearity of the overall holographic projection system experimentally, using binary phase holograms computed using the one-step phase retrieval (OSPR) algorithm, and then applies a digital pre-distortion (DPD) method to correct for the non-linearity. Both a notable increase in reconstruction quality and a significant reduction in mean squared error were observed, proving the effectiveness of the proposed DPD-OSPR algorithm.
11:50
Developing color characterization models for a 3D printer, Ruili He, University of Leeds (UK)
Abstract: In this study, third-order polynomial regression (PR) and deep neural networks (DNN) were used to perform color characterization from CMYK to CIELAB color space, based on a dataset of 2016 color samples produced using a Stratasys J750 3D color printer. Five output variables, including CIE XYZ, the logarithm of CIE XYZ, CIELAB, spectral reflectance, and the principal components of spectra, were compared for printer color characterization performance. Ten-fold cross validation was used to evaluate the accuracy of the models developed using the different approaches, and CIELAB color differences were calculated under the D65 illuminant. In addition, the effect of different training data sizes on predictive accuracy was investigated. The results showed that the DNN method produced much smaller color differences than the PR method, but it is highly dependent on the amount of training data. Furthermore, the logarithm of CIE XYZ as the output provided higher accuracy than CIE XYZ.
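As a sketch of the polynomial-regression approach described in the abstract, the snippet below fits a third-order polynomial mapping from CMYK to CIELAB by least squares. The data are random stand-ins (not the paper's 2016 measured samples) and all helper names are illustrative, not the authors' code.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(cmyk, degree=3):
    """Expand CMYK values into all polynomial terms up to the given degree."""
    n = cmyk.shape[0]
    cols = [np.ones(n)]  # constant term
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(cmyk.shape[1]), d):
            cols.append(np.prod(cmyk[:, idx], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
cmyk = rng.random((200, 4))       # stand-in for measured CMYK inputs
lab = rng.random((200, 3)) * 100  # stand-in for measured CIELAB values

X = poly_features(cmyk)                           # 35 terms for 4 inks at degree 3
coeffs, *_ = np.linalg.lstsq(X, lab, rcond=None)  # least-squares fit
pred = X @ coeffs                                 # predicted CIELAB values
```

In practice the fitted model would be evaluated with cross validation and CIELAB color-difference formulas rather than raw residuals.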
Day 2 Interactive Paper Previews
12:10 – 12:30 London
Session Chair: Aditya Sole, NTNU (Norway)
Facial redness perception based on realistic skin models, Yan Lu, Kaida Xiao, and Zheng Li, University of Leeds (UK)
Abstract: Facial redness is an important perceptual attribute that is of considerable interest in application fields such as dermatology and cosmetics. Existing studies have commonly used the average CIELAB a* value of the facial skin area to represent overall facial redness. Yet, the perception of facial redness has never been precisely examined. This research was designed to quantify the perception of facial redness and to investigate the perceptual difference between faces and uniform patches. Eighty images of real human faces and uniform skin colour patches were scaled in terms of their perceived redness by a panel of observers. The results showed that CIELAB a* was not a good predictor of facial redness, since perceived redness was also affected by the L* and b* values. A new index, RIS, was developed to accurately quantify the perception of facial skin redness, achieving much higher accuracy (R2 = 0.874) than the a* value alone (R2 = 0.461). The perceptual difference between facial redness and patch redness is also discussed.
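The paper's RIS formula is not reproduced here; the sketch below only illustrates the kind of comparison reported, fitting an a*-only predictor against one using all of L*, a*, and b* on synthetic redness ratings. All data, coefficients, and names are stand-ins, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 80
L = rng.uniform(40, 80, n)
a = rng.uniform(5, 35, n)
b = rng.uniform(5, 30, n)
# Synthetic "perceived redness": driven by a*, reduced by lightness and yellowness
redness = a - 0.3 * (L - 60) - 0.2 * b + rng.normal(0, 1.0, n)

def r2(pred, y):
    """Coefficient of determination."""
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

# a*-only linear predictor
ca = np.polyfit(a, redness, 1)
r2_a = r2(np.polyval(ca, a), redness)

# Predictor using all three CIELAB coordinates
X = np.column_stack([L, a, b, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, redness, rcond=None)
r2_lab = r2(X @ coef, redness)
```

On data like this, the three-coordinate model explains variance that a* alone cannot, mirroring the direction of the paper's result.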
Does motion increase perceived magnitude of translucency?, Davit Gigilashvili, David Norman Díaz Estrada, and Lakshay Jain, NTNU (Norway)
Abstract: The visual mechanisms behind our ability to distinguish translucent and opaque materials are not fully understood. Disentangling the contributions of surface reflectance and subsurface light transport to the still image structure is an ill-posed problem. While the overwhelming majority of works addressing translucency perception use static stimuli, behavioral studies show that human observers tend to move objects to assess their translucency. Therefore, we hypothesize that translucent objects appear more translucent and less opaque when observed in motion than when shown as still images. In this manuscript, we report two psychophysical experiments that we conducted using static and dynamic visual stimuli to investigate how motion affects perceived translucency.
Color change of printed surfaces due to a clear coating with matte finishing, Fanny Dailliez1, Mathieu Hebert2, Lionel Simonot3, Lionel Chagas1, Anne Blayo1, and Thierry Fournel4; 1LGP2 (France), 2Université Jean Monnet (France), 3Institut Pprime (France), and 4University of Saint-Etienne (France)
Abstract: When a clear layer is coated on a diffusing background, light is reflected multiple times within the transparent layer between the background and the air-layer interface. If the background is lit at one point, the angular distribution of the scattered light and Fresnel's angular reflectance of the interface induce a specific irradiance pattern on the diffuser: a ring-like halo. In the case where the background is not homogeneously colored, e.g. a half-tone print, the multiple reflection process induces multiple convolutions between the ring-like halo and the halftone pattern, which increases the probability for light to meet differently colored areas of the background and thus induces a color change of the print. This phenomenon, recently studied in the case of a smooth layer surface (glossy finishing), is extended here to a rough surface layer (matte finishing) in order to assess the impact of surface roughness on the ring-like halo, and thereby on the print color change. A microfacet-based bidirectional reflectance distribution function (BRDF) model is used to predict the irradiance pattern on the background, and physical experiments have been carried out for verification. They show that the irradiance pattern in the case of a rough surface is still a ring-like halo, and that the print color change is similar to the one observed with a smooth interface, discarding the in-surface reflections which can induce an additional color change.
LEDSimulator technology: A research tool for color and texture, Jinyi Lin and Ming Ronnier Luo, Zhejiang University (China)
Abstract: This paper describes LEDSimulator, a system that exhibits the impact of texture on colour appearance and serves as a colour communication tool for supply chain management.
LEDSimulator is capable of accurately displaying coloured textures, achieving successful colour reproduction between media, and expediting the production cycle. The key technologies that accomplish this are introduced here, including: 1) visual colour matching on textures, 2) projector characterization modeling using the conventional and an advanced reduced LUT approach, and 3) a model to achieve metameric cross-media reproduction.
Wide-field gloss scanner designed to assess appearance and condition of modern paintings, Mathieu Hebert1, Pauline Hélou De la Grandière2, Yann Pozzi1, Mathieu Thoury3, and Lionel Simonot4; 1Université Jean Monnet (France),2CY Cergy Paris Université (France), 3IPANEMA (France), and 4Institut Pprime (France)
Abstract: When one seeks to characterize the appearance of art paintings, color is the visual attribute that usually attracts most attention: not only does color predominate in the reading of the pictorial work, but it is also the attribute that we best know how to evaluate scientifically, thanks to spectrophotometers and imaging systems that have become portable and affordable, and thanks to the CIE color appearance models that allow us to convert the measured physical data into quantified visual values. However, for some modern paintings, the expression of the painter relies at least as much on gloss as on color; Pierre Soulages (1919-2022) is an exemplary case. This considerably complicates the characterization of the appearance of the paintings, because the scientific definition of gloss, its link with measurable light quantities, and the measurement of these light quantities over a whole painting are much less established than for color. This paper reports on the knowledge, challenges, and difficulties of characterizing the gloss of painted works, by outlining a possible imaging system to achieve this.
Influence of the hue of absorption pigments on graininess perception, Esther E. Perales1, Alejandro Ferrero2, Julián Espinosa1, Jorge Pérez1, Mercedes Gutiérrez1, Marjetka Milosevic3, and Juan Carlos Fernández-Becáres3; 1Universidad de Alicante (Spain), 2Consejo Superior de Investigaciones Científicas (Spain), and 3PPG Ibérica (Spain)
Abstract: Valid and traceable instrumental measurements of all the visual attributes that characterize the appearance of a material (color, gloss, texture and translucency) are necessary to ensure good product quality control. The objective of this work is to evaluate the visual attribute of texture associated with special effect pigments in order to be able to establish a measurement scale. In particular, this study evaluates the influence of the hue of absorption pigments on the perception of graininess. For this purpose, nine samples with a systematic variation of hue angle were used. A visual experiment based on the comparison of triplets was designed, and a multidimensional scaling (MDS) analysis was applied to obtain relative values of perceived graininess. The results confirm that the hue angle of the absorption pigments does not influence the perception of graininess.
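As a hedged illustration of how MDS recovers a perceptual scale from dissimilarity judgments, the sketch below applies classical (Torgerson) MDS to a toy dissimilarity matrix; the paper's triplet-comparison procedure and any non-metric details are not reproduced, and all data are invented.

```python
import numpy as np

def classical_mds(D, dims=1):
    """Classical (Torgerson) MDS: embed a dissimilarity matrix into `dims` dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]  # keep largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Toy dissimilarities between 5 samples lying on a single perceptual graininess axis
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
D = np.abs(x[:, None] - x[None, :])
coords = classical_mds(D, dims=1).ravel()  # recovered 1D graininess scale
```

For exact line-like dissimilarities, the recovered coordinates reproduce the original spacing up to sign and translation.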
Advancing material appearance measurement: A cost-effective multispectral imaging system for capturing SVBRDF and BTF, Majid Ansari-Asl1, Markus Barbieri2, Gael Obein3, and Jon Yngve Hardeberg1; 1NTNU (Norway), 2Barbieri Electronic, and 3CNAM (France)
Abstract: This paper introduces a novel system for measuring the appearance of materials by capturing their reflectance represented by the Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF) and the Bidirectional Texture Function (BTF). Inspired by goniospectrophotometers, our system uses a fully aligned and motorized turntable that rotates the sample around three axes to scan the entire hemispherical range of incident-reflection directions. The camera remains fixed while the light source can be rotated around one axis, providing the fourth degree of freedom. To ensure high-precision color measurement and spectral reproduction for reliable relighting purposes, we use a high-resolution multispectral camera and a broadband LED light source. We provide an overview of our instrument in this paper, and discuss its limitations to be addressed in future work.
Depth perception assessment for 3D display using a real-time controllable random dot stereogram, Young-sang Ha, Rang-kyun Mok, and Beom-shik Kim, Samsung Display (Republic of Korea)
Abstract: This paper proposes a method for subjectively evaluating the actual depth range of a 3D display. It presents the positive and negative depth of the 3D display using visual stimulation images such as random dot stereograms (RDS). We developed a system that allows subjects to control the depth range of the RDS images in real time in order to increase evaluation accuracy. The subjects then evaluate the clarity of the image form and the permissible level of recognition of the stereoscopic image within the depth range. From the acquired cognitive evaluation results, we can determine the depth range of the 3D display. Finally, the depth enhancement under the light field display (LFD) experimental conditions is quantified using a statistical t-test. This experimental method can be a successful approach to developing a 3D stereoscopic evaluation system and producing 3D content that accounts for perceptual factors.
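The t-test comparison mentioned in the abstract can be sketched as follows, here using Welch's two-sample t statistic on synthetic depth-perception scores; all values and condition names are illustrative stand-ins, not the paper's data.

```python
import numpy as np

def welch_t(x, y):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    t = (x.mean() - y.mean()) / np.sqrt(vx / nx + vy / ny)
    df = (vx / nx + vy / ny) ** 2 / (
        (vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1)
    )
    return t, df

rng = np.random.default_rng(2)
baseline = rng.normal(10.0, 1.0, 30)  # perceived depth scores, condition A
enhanced = rng.normal(12.0, 1.0, 30)  # perceived depth scores, condition B
t, df = welch_t(enhanced, baseline)   # large t indicates a reliable depth enhancement
```

The t statistic would then be compared against the t distribution with `df` degrees of freedom to obtain a p-value.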
12:40 – 13:40
LUNCH BREAK
Interdisciplinary 2
13:40 – 14:50 London
Session Chair: Belen Masia, Universidad de Zaragoza (Spain)
13:40
Focal Talk: TBD, Belen Masia, associate professor, Computer Science Department, Universidad de Zaragoza (Spain)
Abstract: Coming soon.
14:10
Optimizing Gabor texture features for materials recognition by convolutional neural networks, Raimondo Schettini1, Paolo Napoletano1, Claudio Cusano2, and Francesco Bianconi3; 1University of Milano-Bicocca (Italy), 2University of Pavia (Italy), and 3Università degli Studi di Perugia (Italy)
Abstract: In this paper, we present a novel technique that allows for customized Gabor texture features by leveraging deep learning neural networks. Our method involves using a Convolutional Neural Network to refactor traditional, hand-designed filters on specific datasets. The refactored filters can be used in an off-the-shelf manner with the same computational cost but significantly improved accuracy for material recognition. We demonstrate the effectiveness of our approach by reporting a gain in discrimination accuracy on different material datasets. Our technique is particularly appealing in situations where the use of the entire CNN would be inadequate, such as analyzing non-square images or performing segmentation tasks. Overall, our approach provides a powerful tool for improving the accuracy of material recognition tasks while retaining the advantages of handcrafted filters.
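The hand-designed Gabor filter bank that serves as the starting point for such refinement can be sketched as below; the parameters are assumed for illustration, and the CNN-based refactoring itself is not reproduced here.

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma, gamma=0.5):
    """Real part of a Gabor filter: a Gaussian-windowed sinusoid at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# A small hand-designed bank: 4 orientations x 2 wavelengths
bank = [gabor_kernel(15, t, w, sigma=4.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for w in (4.0, 8.0)]
```

In the approach described above, filters like these would be loaded as convolutional weights and fine-tuned on a material dataset while keeping the same inference cost.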
14:30
Color appearance of iridescent objects, Katja Doerschner1, Robert Ennis1, Philipp Börner1, Frank J. Maile2, and Karl R. Gegenfurtner1; 1Justus Liebig University (Germany) and 2Schlenk Metallic Pigments GmbH (Germany)
Abstract: Iridescent objects and animals are quite mesmerizing to look at, since they feature multiple intense colors, whose distribution can vary quite dramatically as a function of viewing angle. These properties make them a particularly interesting and unique stimulus to experimentally investigate the factors that contribute to single color impressions of multi-colored objects.
Our stimuli were 3D-printed shapes of varying complexity that were coated with three different types of iridescent paint. For each shape-color combination, participants performed single- and multi-color matches for different views of the stationary object, as well as single color matches for a corresponding rotating stimulus. In the multi-color matching task, participants subsequently rated the size of the surface area on the object that was covered by the match-identified color. Results show that single-color appearance of iridescent objects varied with shape complexity, view, and object motion. Moreover, hue similarity of color settings in the multi-color matching task best predicted single-color appearance; however, this predictor was weaker for single color matches in the motion condition. Taken together, our findings suggest that the single-color appearance of iridescent objects may be modulated by chromatic factors, spatial relations, and the characteristic dynamics of color changes that are typical of this type of material.
14:50 – 16:00
Posters and Coffee
Closing Keynote
16:00 – 17:00 London
Session Chair: Marina Bloj and Lionel Simonot
16:00
Computational imaging for realistic appearance capture, Abhijeet Ghosh, professor of Graphics and Imaging, Department of Computing, Imperial College London (UK)
Abstract: This talk provides an overview of the research we have been conducting in the Realistic Graphics and Imaging group at Imperial College London and at Lumirithmic (Imperial spin-out) on measurement based appearance modeling for realistic computer graphics. The talk spans practical techniques for both material and facial appearance capture and techniques for diffuse-specular separation of reflectance. The first part of the talk covers our work on acquiring shape and reflectance of planar material samples. This includes free-form hand-held capture using a mobile device, as well as exploiting polarization imaging, and also resolving materials exhibiting iridescence due to surface diffraction. The second part focuses on computational illumination for high-quality facial appearance capture, and here I cover some previous work on using specialized Light Stages and its impact in film VFX (at USC-ICT), as well as a novel desktop-based high-quality facial capture system developed at Lumirithmic.
17:00
Wrap-up and best paper award; announcement of LIM 2024