
IMPORTANT DATES

Call for Papers
 » Journal-first (JIST or JPI): 5 June
 » Conference: 28 June

Acceptance Notification
 » Journal-first (JIST or JPI): mid-July
 » Conference: mid-Aug

Registration Opens: early Sept

Final Manuscripts Due
 » Journal-first: 12 Sept
 » Conference: 4 Oct

Early Registration Ends: 17 Oct

Technical Sessions Begin: 1 Nov

Learn the Latest About Color

CIC29 offers the high-quality short courses, workshops, keynote talks, and technical papers program the event is known for, along with ways to interact with colleagues and friends.

We know many of you are missing the interaction that is a natural part of in-person meetings. While an online event is not exactly the same, this year we've planned four interactive social events to help fill that void and let you meet new colleagues while having fun and learning; see the program for details: 1. Icebreaker, 2. Conundrums, 3. Workshops, and 4. Color Combat.

We've also got two fun online contests planned: 1. CIC29 Bingo and 2. Color Scientist meets a Bartender/Barista... 

MONDAY 1 NOVEMBER 2021

WELCOME AND OPENING KEYNOTE

10:00 – 11:10

Some of the Interesting Challenges of Developing the HoloLens Sensor Array, and Some Fun and Hard Imaging Problems Ahead, Andy Goris, Formerly Microsoft HoloLens, HP Camera Division, and HP Computer Graphics Lab (US)

The HoloLens augmented reality computer has nine cameras of five types, including monochrome visible light, infrared, and color. These cameras sample the user's motion, gaze, and hands, as well as the world the user works in. The color camera sends video to remote people working in real time with the HoloLens wearer. The first part of this talk describes a few of the interesting problems developing these cameras. The second part covers some new and hard problems ahead for cameras and image processing that will require machine learning, a deeper understanding of information theory, and an understanding of the human visual experience below the conscious level.

ONLINE ICEBREAKER

11:10 – 11:50

Looking to meet someone new? Mix and mingle at the CIC ice breaker!
After brief introductions, attendees answer the following questions as they wish and see where the conversation leads: What’s the most fun color-related experience you’ve had? Which color-related paper or book has impressed or inspired you most? What is your pet peeve about commonly stated color “facts”?

CAMERA COLOR

11:50 – 12:50

11:50

Designing a Color Filter with Transmittance Constraints for Improving the Color Accuracy of Digital Cameras, Yuteng Zhu and Graham Finlayson, University of East Anglia (UK)

12:10

GamutNet: Restoring Wide-gamut Colors for Camera-captured Images, Hoang Le1, Taehong Jeong2, Abdelrahman Abdelhamed3, Hyun Joon Shin4, and Michael Brown1; 1York University (Canada), 2MAXST (Republic of Korea), 3Samsung (Canada), and 4Ajou University (Republic of Korea)

12:30

The Discrete Cosine Maximum Ignorance Assumption, Graham D. Finlayson1, Javier Vazquez-Corral2, and Fufu Fang1; 1University of East Anglia (UK) and 2Computer Vision Center / Universitat Autònoma de Barcelona (Spain)

12:50 – 13:20 Session Break / Posters Available for Viewing
Meet with others in Gather for discussions with colleagues and speakers.

SPECTRA

13:20 – 14:20

13:20

Investigating the Upper-bound Performance of Sparse-coding-based Spectral Reconstruction from RGB Images, Yi-Tun Lin and Graham Finlayson, University of East Anglia (UK)

13:40

Spectral-reflectance Estimation under Multiple Light Sources, Shoji Tominaga, Norwegian University of Science and Technology / Nagano University (Japan)

14:00

Investigating the Kokhanovsky Snow Reflectance Model in Close-range Spectral Imaging, Mathieu Nguyen, Jean-Baptiste Thomas, and Ivar Farup, Norwegian University of Science and Technology (Norway)

Break in program to accommodate time zones

IMPROVING DISPLAYS

19:00 – 20:10

19:00

Welcome

19:10

Methods to Improve Colour Mismatch between Displays, Keyu Shi and Ming Ronnier Luo, Zhejiang University (China)

19:30

Effects of Display and Ambient Illuminance on Visual Comfort for Reading on a Mobile Device, Yu Liu and Ming Ronnier Luo, Zhejiang University (China)

19:50

JIST-first Preliminary Result on the Direct Assessment of Perceptible Simultaneous Luminance Dynamic Range, Fu Jiang and Mark D. Fairchild, Rochester Institute of Technology (US)

20:10 – 20:30 Session Break / Posters Available for Viewing
Meet with others in Gather for discussions with colleagues and speakers.

INTERPRETATIONS

20:30 – 21:10

20:30

Color Layer Scissioning in See-through Augmented Reality, Tucker Downs and Michael Murdoch, Rochester Institute of Technology (US)

20:50

Visual Perception of Surface Properties through Manipulation, James Ferwerda and Snehal Padhye, Rochester Institute of Technology (US)

TUESDAY 2 NOVEMBER 2021

IMAGE QUALITY

08:30 – 09:40

08:30

Welcome

08:40

Image Enhancement for Colour Deficiency via Gamut Mapping, Lihao Xu, Hangzhou Dianzi University, and Ming Ronnier Luo, Zhejiang University (China)

09:00

The Development of Three Image Quality Evaluation Metrics based on a Comprehensive Dataset, Dalin Tian, Muhammad Usman Khan, and Ming Ronnier Luo, Zhejiang University (China)

09:20

How Good is Too Good? A Subjective Study on Over Enhancing Images, Sahar Azimian and Farah Torkamani Azar, Shahid Beheshti University (Iran); and Seyed Ali Amirshahi, Norwegian University of Science and Technology (Norway)

09:40 – 10:00 Session Break / Posters Available for Viewing
Meet with others in Gather for discussions with colleagues and speakers.

TUESDAY KEYNOTE AND IS&T AWARDS

10:00 – 11:00

Scene-referred Gamut Compression in ACES, Carol Payne, Netflix (US); Matthias Scharfenberg, Industrial Light & Magic (Canada); and Nick Shaw, Antler Post (UK)

The Academy Color Encoding System (ACES) is an open-source color management framework used in film and TV production. One barrier to wider adoption of ACES has been its handling of values outside the working gamut.

Gamut mapping is a reasonably well-defined problem space in the context of known display and viewing conditions. However, when the image state is scene-referred, "traditional" gamut mapping approaches may not apply. This presentation walks through the research, development, testing, and implementation of the Academy Color Encoding System (ACES) Gamut Compression algorithm—the solution developed to fix out-of-gamut pixel values in scene-referred spaces.

11:00 – 11:30 Session Break / Posters Available for Viewing
Meet with others in Gather for discussions with colleagues and speakers.

COLOR CONTRAST

11:30 – 12:10

11:30

JPI-first: Deep Encoding with Colour Opponency, Arash Akbarinia and Raquel Gil-Rodriguez, Justus-Liebig University (Germany)

11:50

Modeling Chromatic Contrast Sensitivity across Different Background Colors and Luminance, Marcel Lucassen, Dragan Sekulovski, and Marc Lambooij, Signify Research (the Netherlands); and Qiang Xu and Ming Ronnier Luo, Zhejiang University (China)

NOISE

12:10 – 12:50

12:10

Influence of Procedural Noise on the Glossiness of 2.5D Printed Materials, Abigail Trujillo1, Donatela Saric2, Susanne Klein1, and Carinna Parraman1; 1Centre for Fine Print Research UWE (UK) and 2Fogra Research Institute for Media Technology (Germany)

12:30

Real World Metamer Sets: Or How We Came to Love Noise, Peter Morovic, HP Inc. (Spain) and Jan Morovic, HP Inc. (UK)

12:50 – 13:20 Session Break / Posters Available for Viewing
Meet with others in Gather for discussions with colleagues and speakers.

IMPROVING PRINTS

13:20 – 14:20

13:20

Color Printing on Pre-colored Textiles, Peter Morovic1, Jan Morovic2, and Sergio Etchebehere1; 1HP Inc. (Spain) and 2HP Inc. (UK)

13:40

Estimation of BRDF Measurements for Printed Colour Samples, Tanzima Habib, Phil Green, and Peter Nussbaum, Norwegian University of Science and Technology (Norway)

14:00

JIST-first Numerical Pathology in Selected Kubelka-Munk Formulas, and Strategies for Mitigation, J. A. Stephen Viggiano, Rochester Institute of Technology (US)

COLOR CONUNDRUMS I

14:20 – 15:10

Join colleagues for an informal discussion about one of the color-related topics listed below. While each conundrum is led by a facilitator, the goal is for everyone to share their opinions and experiences. Choices during Color Conundrums I are: 

Conundrum A: What is HDR?
Convenor: Timo Kunkel, Dolby Labs
HDR is widely talked about in a multitude of contexts. On the surface, it often seems straightforward to define what HDR is. Dig a little deeper, however, and it is not that simple anymore: Is HDR a special effect in an app? A type of image processing? A graphics technique? A series of images at different exposures? A special camera? A special display?

Conundrum B: Communication of Color
Convenor: Philipp Urban, Fraunhofer Institute for Computer Graphics Research IGD
Is color a separable property of objects? The physiological evidence of color channels in early vision would seem to support this view; however, many color phenomena contradict this simple idea. What does this mean for how we measure, characterize, and ultimately communicate color?

Conundrum C: Color and Vision: Beyond the Rainbow
Convenor: James Ferwerda, Rochester Institute of Technology
How do processes other than those based on the spectral properties of light contribute to the perception of color? How do color illusions arise? How do simultaneous contrast, assimilation, induction colors, etc. work? What is the role of expectation? What can these phenomena tell us about color processing at all levels of the visual system?

Break in program to accommodate time zones

TWO-MINUTE INTERACTIVE PAPER PREVIEWS FOLLOWED BY INTERACTIVE PAPER POSTER SESSION A

19:00 – 20:20

A-01 Development of a Three-dimensional Color Rendition Space for Tunable Solid-state Light Sources, Dorukalp Durmus, Pennsylvania State University (US)

A-02 Selection of Optimal External Filters for Colorimetric Cameras, Michael Vrhel, Artifex Software, and H. Joel Trussell, North Carolina State University (US)

A-03 Effect of Digitally Generated Colored Filters on Farnsworth-Munsell 100 Hue Test by Red-green Color Vision-deficient Observers, Shunnma Saito and Keiko Sato, Kagawa University (Japan)

A-04 Highlighted Document Image Classification, Yafei Mao1, Yufang Sun1, Peter Bauer2, Todd Harris2, Mark Shaw2, Lixia Li2, and Jan Allebach1; 1Purdue University and 2HP Inc. (US)

A-05 A Digital Test Chart for Visual Assessment of Color Appearance Scales, Mark Fairchild, Rochester Institute of Technology (US)

A-06 Time Course Chromatic Adaptation under Highly Saturated Illuminants, Hui Fan, Ming Ronnier Luo, and Yuechen Zhu, Zhejiang University (China)

A-07 Models to Predict Naturalness and Image Quality for Images Containing Three Memory Colours: Sky, Grass, and Skin, Jason Ji, Dalin Tian, and Ming Ronnier Luo, Zhejiang University (China)

A-08 New Colour Appearance Scales Under High Dynamic Range Conditions, Xi Lv and Ming Ronnier Luo, Zhejiang University (China)

A-09 Dye Amount Estimation in a Papanicolaou-stained Specimen using Multispectral Imaging, Saori Takeyama, Tomoaki Watanabe, and Masahiro Yamaguchi, Tokyo Institute of Technology; Takumi Urata and Fumikazu Kimura, Shinshu University; and Keiko Ishii, Okaya City Hospital (Japan)

A-10 A New Corresponding Color Dataset Covering a Wide Luminance Range under High Dynamic Range Viewing Condition, Xinye Shi, Yuechen Zhu, and Ming Ronnier Luo, Zhejiang University (China)

A-11 White Appearance for Optimal Text-background Lightness Combination Document Layout on a Tablet Display under Normal Light Levels, Hsin-Pou Huang1, Hung-Chung Li2, Minchen Wei3, and Yu-Cheng Huang1; 1Chihlee University of Technology (Taiwan), 2Chang Gung University of Science and Technology (Taiwan), and 3The Hong Kong Polytechnic University (Hong Kong)

A-12 Preferred White Balance for Skin Tones in Multi-illuminant Scenes, Anku and Susan P. Farnand, Rochester Institute of Technology (US)

JIST-first A-13 Emphasis on Material Appearance by a Combination of Dehazing and Local Visual Contrast, Hiroaki Kotera, Kotera Imaging Laboratory (Japan)

JIST-first A-14 New Encoder Learning for Captioning Heavy Rain Images via Semantic Visual Feature Matching, Chang-Hwan Son and Pung-Hwi Ye, Kunsan National University (Republic of Korea)

JIST-first A-15 Development of a System to Measure the Optical Properties of Facial Skin using a 3D Camera and Projector, Kumiko Kikuchi1, Shoji Tominaga2,3, and Jon Y. Hardeberg2; 1Shiseido Co. Ltd. (Japan), 2Norwegian University of Science and Technology (Norway), and 3Nagano University (Japan)

COLOR CONUNDRUMS II

20:20 – 21:10

Join colleagues for an informal discussion about one of the color-related topics listed below. While each conundrum is led by a facilitator, the goal is for everyone to share their opinions and experiences. Choices during Color Conundrums II are: 

Conundrum D: Color and Color Names Around the World and Through Time
Convenor: Minjung Kim, Facebook Reality Labs
How are colors named across languages or classified in different places? Are there names for particular colors that don’t translate well between languages? Are there color names whose meaning has changed over time, or whose meaning is ambiguous or commonly misunderstood?

Conundrum E: What does industry need from new color scientists?
Convenor: Jerry Jia, Facebook Reality Labs 
What kinds of academic preparation and experience does one need today to succeed in industry as a color and imaging scientist or engineer? What are the most important things academia should be teaching color science/engineering students in order to meet the needs of industry now and for the next 10-20 years?

Conundrum F: What color problem needs to be solved ASAP?
Convenor: Dave Wyble, Avian Rochester, LLC 
What do you think is the most pressing or important color-related problem that needs to be solved now?

WEDNESDAY 3 NOVEMBER 2021

WHITE AND COLOR

08:30 – 09:40

08:30

Welcome

08:40

JIST-first Perception of White for Stimuli with Luminance beyond the Diffuse White, Yiqian Li and Minchen Wei, The Hong Kong Polytechnic University (Hong Kong)

09:00

The Helmholtz-Kohlrausch Effect and Its Impact on Near-white Substrate Colours, Gregory High and Phil Green, Norwegian University of Science and Technology (Norway)

09:20

G0 Revisited as Equally Bright Reference Boundary, Hao Xie and Mark Fairchild, Rochester Institute of Technology (US)

09:40 – 10:00 Session Break / Posters Available for Viewing
Meet with others in Gather for discussions with colleagues and speakers.

CLOSING KEYNOTE

10:00 – 11:00

Learning to Estimate Lighting from a Single Image, Jean-François Lalonde, Université Laval (Canada)

Combining virtual and real visual elements into a single, realistic image requires the accurate estimation of the lighting conditions of the real scene. Unfortunately, doing so typically requires specific capture devices or physical access to the scene. This talk presents approaches that alleviate these restrictions and instead automatically estimate lighting from a single image. In particular, recent works that frame lighting estimation as a learning problem for both the indoor and outdoor illumination scenarios are presented. In both cases, large datasets of omnidirectional HDR images are leveraged for training the models. It will be shown that using our illumination estimates for applications like 3D object insertion can achieve photo-realistic results on a wide variety of challenging scenarios.

11:00 – 11:30 Session Break / Posters Available for Viewing
Meet with others in Gather for discussions with colleagues and speakers.

TWO-MINUTE INTERACTIVE PAPER PREVIEWS FOLLOWED BY INTERACTIVE PAPER POSTER SESSION B

11:30 – 13:00

B-01 Use of Spectral and Spatial Information for Red Scale Pest Control, Francisco J. Burgos-Fernández1, Carlos E. García-Guerra1, Fernando Díaz-Doutón1, Abel Zaragoza2, Albert Virgili2, and Meritxell Vilaseca1; 1Universitat Politècnica de Catalunya and 2COMERCIAL QUÍMICA MASSÓ, S.A. (Spain)

B-02 Colourlab Image Database: Geometric Distortions, Marius Pedersen and Seyed Ali Amirshahi, Norwegian University of Science and Technology (Norway)

B-03 Reflectance Estimation from Snapshot Multispectral Images Captured under Unknown Illumination, Vlado Kitanovski, Jean-Baptiste Thomas, and Jon Yngve Hardeberg, Norwegian University of Science and Technology (Norway)

B-04 Lippmann Photography: History and Modern Replications of the Elusive Structural Color, Elizabete Kozlovska, Susanne Klein, and Frank Menger, University of the West of England (UK)

B-05 Radiometric Spectral Fusion of VNIR and SWIR Hyperspectral Cameras, Federico Grillini, Jean-Baptiste Thomas, and Sony George, Norwegian University of Science and Technology (Norway)

B-06 Optimising a Euclidean Colour Space Transform Simultaneously for Colour Order and Perceptual Uniformity, Luvin Munish Ragoo and Ivar Farup, Norwegian University of Science and Technology (Norway)

B-07 Joint Demosaicing of Colour and Polarisation from Filter Arrays, Alexandra Spote and Pierre-Jean Lapray, Université de Haute-Alsace (France); and Jean-Baptiste Thomas and Ivar Farup, Norwegian University of Science and Technology (Norway)

B-08 Image-based Goniometric Appearance Characterisation of Bronze Patinas, Yoko Arteaga and Clotilde Boust, Centre of Research and Restoration of the Museums of France (France); Angele Dequier, National Institute of Patrimony (France); and Jon Yngve Hardeberg, Norwegian University of Science and Technology (Norway)

B-09 An Analysis of Spectral Similarity Measures, Mirko Agarla, Simone Bianco, Luigi Celona, and Raimondo Schettini, University of Milano-Bicocca (Italy); and Mikhail Tchobanou, Huawei Technologies Co. Ltd. (Russia)

B-10 Benchmarking Modern Gloss Correlators with Established ISO 2813 Standard and Visual Judgment of Gloss, Donatela Šarić1, Andreas Kraushaar2, Marco Mattuschka3, and Phil Green1; 1Norwegian University of Science and Technology (Norway), 2Fogra Research Institute for Media Technologies (Germany), and 3Vizoo 3D (Germany) 

B-11 Extending the Unmixing Methods to Multispectral Images, Jizhen Cai1, Hermine Chatoux1, Clotilde Boust2, and Alamin Mansouri1; 1University Bourgogne Franche-Comté and 2Le Centre de Recherche et de Restauration des Musees de France (France)

B-12 Estimating Visual Difference between Image Reproductions – Magnitude Estimation by Observers In-person and Online, Gregory High, Peter Nussbaum, and Phil Green, Norwegian University of Science and Technology (Norway)

B-13 Long Range Diffusion with Control of the Directional Differences, Hans Jakob Rivertz and Ali Alsam, Norwegian University of Science and Technology (Norway)

B-14 Perceptual Navigation in Absorption-scattering Space, Davit Gigilashvili1, Philipp Urban1,2, Jean-Baptiste Thomas1, Marius Pedersen1, and Jon Yngve Hardeberg1; 1Norwegian University of Science and Technology (Norway) and 2Fraunhofer Institute for Computer Graphics Research IGD (Germany)

JIST-first B-15 Influence of Acquisition Parameters on Pigment Classification using Hyperspectral Imaging, Dipendra J. Mandal, Sony George, and Marius Pedersen, Norwegian University of Science and Technology (Norway); and Clotilde Boust, Center for Research and Restoration of Museums of France (C2RMF) (France)

JIST-first B-16 The Influence of Wedge Angle, Feedstock Color, and Infill Density on the Color Difference of FDM Objects, Ali Payami Golhin, Are Strandlie, and Philip John Green, Norwegian University of Science and Technology (Norway)

WORKSHOP I

13:00 – 14:00

Elevating the Story: Bridging Arts and Science,
Convener: Shane Mario Ruggieri, Dolby Labs, Inc. (US)

Speakers:
Shane Mario Ruggieri, Dolby Labs, Inc. (US)
Stacey Spears, Spears & Munsil (US)
Joachim Zell, Barco (US)

This workshop explores how color and imaging scientists effectively interact with color creatives to develop technologies and workflows that are ready to elevate image fidelity and creative intent for storytelling.

Shane Mario Ruggieri, CSI, is one of the most experienced Dolby Vision colorists in the world. Whether creating forward-looking HDR content, training other colorists, or consulting on Dolby Vision and HDR workflows, he strives to help define the language of HDR storytelling. Ruggieri’s resume includes work for Apple, Dolby, ARRI, Visa, HBO, and Universal Studios. He maintains that the most fun he’s had has been as the resident “golden eye” (or guinea pig) for Dolby’s Applied Vision Science Group.

Stacey Spears is the co-creator of the popular Spears & Munsil Benchmark DVD and Blu-ray discs. He has also created content for calibration and test discs from Joe Kane Productions, Anchor Bay Technologies, Datacolor, Marvell, and Microsoft. He wrote for many years on home video topics for the audio/video enthusiast site Secrets of Home Theater and High Fidelity, where he created the “Progressive Scan Shootout” and co-discovered the so-called "chroma bug" in MPEG decoder chips. Spears currently works for one of the leading digital cinema camera manufacturers.

Joachim Zell is head of HDR content workflow at Barco. Prior to that he was VP of technology and imaging science at EFILM/Deluxe, where he designed and monitored production workflows from on-set production to movie release. He has also worked at Technicolor Thomson and Grass Valley Thomson. Zell is an associate member of the American Society of Cinematographers (ASC); co-chair of the ASC’s MITC Next-Generation Cinema Display committee; and co-producer of the ASC “Standard Evaluation Material V2” short. He is ACES project vice chair at the Academy of Motion Picture Arts and Sciences.

Break in program to accommodate time zones

CHANGING APPEARANCE

19:00 – 20:10

19:00

Welcome

19:10

A Study on Memory Colours, Mingkai Cao and Ming Ronnier Luo, Zhejiang University (China)

19:30

Accumulation of Corresponding Colours under Extreme Chromatic Illuminations and Modification of CAM16, Yuechen Zhu and Ming Ronnier Luo, Zhejiang University (China)

19:50

The Threshold of Color Inconstancy, Che Shen and Mark Fairchild, Rochester Institute of Technology (US)

20:10 – 20:30 Session Break / Posters Available for Viewing
Meet with others in Gather for discussions with colleagues and speakers.

DESCRIBING APPEARANCE

20:30 – 21:10

20:30

Testing Colour Appearance Model based UCS using HDR, WCG and COMBVD Datasets, Qiang Xu, Safdar Muhammad, and Ming Ronnier Luo, Zhejiang University (China)

20:50

Comparison of Remote and In-person Tutorials of Color Appearance Phenomena, Dorukalp Durmus, Pennsylvania State University (US)

THURSDAY 4 NOVEMBER 2021

CLOSING REMARKS AND CIC BEST PAPER AWARDS

10:00 – 10:20

WORKSHOP II

10:20 – 11:50

Color: From Images to Videos,
Conveners: Marco Buzzelli, University of Milano – Bicocca (Italy), and Alain Trémeau, University Jean Monnet, St-Etienne (France)

Speakers:
Marco Buzzelli, University of Milano - Bicocca (Italy)
Mark Fairchild, Rochester Institute of Technology (US)
Shoji Tominaga, Norwegian University of Science and Technology (Norway)
Simone Zini, University of Milano - Bicocca (Italy)

One of the growing challenges the color research community faces is moving from the image to the video domain, across all aspects of color imaging. This workshop brings together experts in the field to discuss techniques taken from traditional color imaging that have been—or could be—extended to videos.

Marco Buzzelli is a postdoctoral fellow at the University of Milano – Bicocca whose research focus includes the characterization of digital imaging devices and object recognition in complex scenes.

Mark Fairchild is head of the Integrated Sciences Academy at RIT, as well as a professor of color science and the graduate program director for the Munsell Color Science Laboratory.

Shoji Tominaga is a professor at the Norwegian University of Science and Technology and visiting researcher at Nagano University. His research interests include multispectral imaging and material appearance.

11:50 – 12:10 Workshop Break

WORKSHOP III

12:10 – 13:10

Color and Architecture: Light Affects Mood, Perception, Wellbeing, and Interaction in Space,
Convener: Timo Kunkel, Dolby Labs, Inc. (US)

Speakers:
David Gill, David Gill Architect (US)
Alstan Jakubiec, University of Toronto (Canada)
Greg Ward, Dolby Labs (US)

How light propagates and fills a space is an essential property of architectural design that strongly influences how we use a space and what emotions we form towards it. Gaining a thorough understanding of the interplay of light with objects and ultimately a human observer is therefore an important aspect, both in research and the actual design process. This workshop discusses several aspects that further this understanding such as the materiality of light and color, how light affects our circadian rhythm, and how we can simulate the impact of light within a space.

David Gill is an architect and educator with more than 20 years of practice and more than 10 years of teaching experience. His interests, both professional and academic, lie in the materiality of architecture: the tectonic, perceptual, and poetic meanings and properties that embody common materials.

Greg Ward is the principal author of the Radiance rendering system used for lighting and daylight design in architecture. His expertise includes reflectance models, high dynamic range image capture and display, image processing, and human perception. He is employed by Dolby Laboratories, and consults for Irystec, Depix, and the Lawrence Berkeley National Laboratory.

Alstan Jakubiec is an assistant professor in the Daniels Faculty of Architecture, Landscape and Design / The School of the Environment at the University of Toronto. His expertise is in the areas of daylight simulation, climate-based annual daylight analysis, visual comfort, occupant behavior, and urban simulation.

13:10 – 13:30 Break

COLOR COMBAT

13:30 – 14:20

Join other attendees to test your knowledge of color trivia. End CIC with a bit of relaxation and fun!


KEYNOTE SPEAKERS

Andy Goris, retired, Microsoft Corporation

Some of the Interesting Challenges of Developing the HoloLens Sensor Array, and Some Fun and Hard Imaging Problems Ahead

Abstract: The HoloLens augmented reality computer has nine cameras of five types, including monochrome visible light, infrared, and color. These cameras sample the user’s motion, gaze, and hands, as well as the world the user works in. The color camera sends video to remote people working in real time with the HoloLens wearer. The first part of this talk describes a few of the interesting problems developing these cameras. The second part covers some new and hard problems ahead for cameras and image processing that will require machine learning, a deeper understanding of information theory, and an understanding of the human visual experience below the conscious level.

Andy Goris spent 41 years as an electrical engineer and manager at NASA, Hewlett Packard, and Microsoft in the design of computer graphics, digital cameras, and augmented reality. He spent the last eight years managing Microsoft’s HoloLens sensor team, developing cameras and sensors for head tracking, eye tracking, hand tracking, world mapping, and traditional color capture. Outside work, Goris is an avid bird and wildflower photographer, expanding his interest in color to the natural world.

Carol Payne, Netflix
Matthias Scharfenberg, Industrial Light & Magic
Nick Shaw, Antler Post

Scene-Referred Gamut Compression in ACES

Abstract: The Academy Color Encoding System (ACES) is an open-source color management framework used in film and TV production. One barrier to wider adoption of ACES has been its handling of values outside the working gamut.

Gamut mapping is a reasonably well-defined problem space in the context of known display and viewing conditions. However, when the image state is scene-referred, “traditional” gamut mapping approaches may not apply. This presentation walks through the research, development, testing, and implementation of the Academy Color Encoding System (ACES) Gamut Compression algorithm—the solution developed to fix out-of-gamut pixel values in scene-referred spaces.
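
For readers less familiar with the operation the keynote covers, the short sketch below illustrates the general idea behind scene-referred gamut compression: each pixel's per-channel distance from the achromatic axis is measured, and only excursions beyond a threshold are smoothly pulled back toward the gamut boundary. It is an illustrative sketch only, not the ACES implementation: the function names, the tanh roll-off, and the threshold value are assumptions made for this example, whereas the published ACES operator uses a parameterised power curve with per-channel constants.

import numpy as np

def compress_distance(d, threshold=0.8):
    # Smoothly map distances in [threshold, inf) into [threshold, 1.0) so that
    # compressed channels land on or inside the gamut boundary (distance 1.0).
    # A tanh roll-off stands in here for the power curve used by ACES.
    span = 1.0 - threshold
    over = np.maximum(d - threshold, 0.0)
    compressed = threshold + span * np.tanh(over / span)
    return np.where(d > threshold, compressed, d)

def gamut_compress(rgb, threshold=0.8):
    # rgb: array of shape (..., 3) in a scene-referred working space.
    # Measure each channel's distance from the achromatic axis (max of R, G, B),
    # compress excursions beyond `threshold`, then rebuild the pixel.
    rgb = np.asarray(rgb, dtype=float)
    ach = rgb.max(axis=-1, keepdims=True)
    d = np.divide(ach - rgb, np.abs(ach),
                  out=np.zeros_like(rgb), where=(ach != 0.0))
    return ach - compress_distance(d, threshold) * np.abs(ach)

# Example: a saturated pixel with a negative (out-of-gamut) blue value
print(gamut_compress(np.array([0.1, 0.9, -0.05])))

In this toy example the negative blue channel is pulled back to a small positive value, the in-gamut green channel is left untouched, and the red channel, which sits just beyond the threshold, is nudged only slightly.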

Carol Alynn Payne started her career in visual effects, working for six years at Industrial Light & Magic, most prominently as color & imaging engineer. At ILM, Payne worked on more than 30 films, including The Irishman and Star Wars: The Last Jedi. Payne joined Netflix in 2019 as an imaging technologist, focused on standards of the future and how we can best utilize imaging technology to preserve creative intent. Payne is a Working Group Chair for the Academy Color Encoding System (ACES), as well as a Technical Steering Committee member of OpenColorIO.

Matthias Scharfenberg is a color and image science engineer and software developer with more than 20 years of experience in the film and visual effects industry. After 12 years of working for Double Negative in London, he joined Industrial Light & Magic in 2016. Scharfenberg is a Working Group Chair for the Academy Color Encoding System (ACES) and a contributor to the OpenColorIO project.

With a degree in Electronic Engineering, Nick Shaw originally worked as an editor. He now provides consultancy and training on color-managed pipelines, through his company, Antler Post, to post and VFX facilities in London and internationally. Shaw is a consultant to the A.M.P.A.S. ACES project, serving as a mentor on the ACES Central forum and sitting on one of the Technical Advisory Councils. He is also a contributor to the open-source Colour Science for Python project.

Jean-François Lalonde, Université Laval

Learning to Estimate Lighting from a Single Image

Abstract: Combining virtual and real visual elements into a single, realistic image requires the accurate estimation of the lighting conditions of the real scene. Unfortunately, doing so typically requires specific capture devices or physical access to the scene. This talk presents approaches that alleviate these restrictions and instead automatically estimate lighting from a single image. In particular, recent works that frame lighting estimation as a learning problem for both the indoor and outdoor illumination scenarios are presented. In both cases, large datasets of omnidirectional HDR images are leveraged for training the models. It will be shown that using our illumination estimates for applications like 3D object insertion can achieve photo-realistic results on a wide variety of challenging scenarios.

Jean-François Lalonde is an associate professor in the ECE department at Université Laval in Canada. Previously, he was a post-doctoral associate at Disney Research, Pittsburgh. He received a PhD in robotics from Carnegie Mellon University (2011). His research interests lie at the intersection of computer vision, computer graphics, and machine learning. He explores how physics-based and data-driven techniques can be unified to better understand, interpret, and recreate the richness of our visual world.
