
March 18, 2009


iPhone Color-Blindness Correction

by brainoids

It has always amazed me how poorly served the color-blind population is when it comes to automated, seamless solutions and adaptations to their condition in the digital and print worlds. I first realized that color blindness was nowhere near as rare as I had thought while working as a scientist and making very heavy use of color encoding in my data visualizations; this was an effective way to learn that a full 2% of the population are dichromats, and around 5% have some form of color blindness.

The details of the visualizations below aren’t terribly important; suffice it to say that what the viewer needs to “get” is that different colors (in this case, types of cloud/storm vertical profiles) live coherently in different areas of the data space (here’s the full writeup).   Deuteranopes are significantly disadvantaged when I show them this chart.

[Figure: One of my visualizations in full color.]

[Figure: The same image with simulated deuteranope (red/green) color deficit.]

What is particularly perplexing is that automated solutions should be fairly trivial. Mathematically, this is just a problem of finding an optimal transformation (mapping) from a three-dimensional data space to a two-dimensional one (for dichromacy), allowing for some adjustable parameters based on a particular individual’s type or degree of color blindness. Not hard, and easily achievable in real time given current computing power.
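
As a rough sketch of what I mean (illustrative Python; the matrix is a commonly circulated single-matrix approximation of deuteranopia, not a rigorous cone-space model in the style of Brettel, Viénot, and Mollon):

```python
import numpy as np

# Commonly circulated approximation of deuteranope vision as a single
# matrix applied in linear RGB. Illustrative only; rigorous simulators
# project in LMS cone space instead.
DEUTERANOPIA = np.array([
    [0.625, 0.375, 0.000],
    [0.700, 0.300, 0.000],
    [0.000, 0.300, 0.700],
])

def srgb_to_linear(c):
    """Undo the sRGB gamma so the matrix acts on linear light."""
    c = c / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Re-apply the sRGB gamma and quantize back to 8 bits."""
    c = np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)
    return (np.clip(c, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8)

def simulate_deuteranopia(image):
    """image: H x W x 3 uint8 array; returns the simulated view."""
    lin = srgb_to_linear(image.astype(np.float64))
    return linear_to_srgb(lin @ DEUTERANOPIA.T)
```

A corrective (rather than simulating) transform is the same shape of operation: a per-pixel remapping cheap enough to run on live video.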

These folks have some rudimentary “Daltonize” algorithms assembled on a web page, as well as a Photoshop filter, which illustrate what the results could be (there’s also a Firefox extension to test/simulate). Interestingly, this simple algorithm doesn’t do terribly well on my test cases above; I think this is because the visualizations, by design, arrange “related” colors next to each other, so the Daltonize algorithm’s contrast boost is overridden by the brain’s desire to impose coherence.
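
For reference, the core of that Daltonize recipe is only a few lines: simulate the deficient view, measure the per-pixel error against the original, and fold the lost information back into channels the viewer can still distinguish. A hedged sketch, reusing the illustrative simulation matrix above (the redistribution matrix is the commonly published one):

```python
import numpy as np

DEUTERANOPIA = np.array([   # illustrative simulation matrix, as above
    [0.625, 0.375, 0.000],
    [0.700, 0.300, 0.000],
    [0.000, 0.300, 0.700],
])

ERR_SHIFT = np.array([      # commonly published error redistribution:
    [0.0, 0.0, 0.0],        # nothing is put back on the red axis;
    [0.7, 1.0, 0.0],        # lost red contrast is pushed into green
    [0.7, 0.0, 1.0],        # and into blue
])

def daltonize(image):
    """image: H x W x 3 float array in [0, 1], assumed linear RGB."""
    simulated = image @ DEUTERANOPIA.T      # what a deuteranope sees
    error = image - simulated               # the information they lose
    return np.clip(image + error @ ERR_SHIFT.T, 0.0, 1.0)
```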

Great opportunity for:

  1. A real-time, iPhone-based color-correction “color loupe”: simply a tap into the iPhone’s onboard camera viewfinder, with some back-end real-time color correction. At minimum, this would help color-blind individuals with what (I assume) are all those frustrating instances (not just scientific data visualization) where data are encoded into colors. Based on near-term iPhone usage (17m sold through Dec 08) and the stats above, this is something like a 250,000–1,000,000-person market. Nice return at a $0.99–$1.99 price point.
  2. Apple to get in the game, and get an edge with this 5% of the Windows user base, with native, real-time Mac color correction (including support for print correction via ColorSync). Actually, given how forward-looking they have been with Universal Access, it’s just plain mystifying that they haven’t hit this niche by now. (Note: there appears to be a Linux/GNOME applet which provides some type of filtering.)

What am I missing? Has someone snapped up all the patents and squirreled them away, blocking any kind of systematic solution? This is way overdue…

5 Comments
  1. Sep 29 2010

    Very interesting.

    I’m an Android programmer and have been working on colour mapping as part of my application Colour Bam!, which names colours. It’s really designed as a tool for artists and for people choosing paint/furniture/clothing colours. However, your idea is a really interesting one.

    Firstly I should say: real-time image processing is not going to be easy, at least on Android… and I don’t think the iPhone will do much better. There are lots of apps which use the “preview” function of the camera, but I think it is rather harder to get a live stream of data to process; I’ll have to look into the APIs. Image processing can be pretty CPU intensive, but hey, with 1 GHz and rising… it should be possible. I don’t know of any apps that really do this sort of image processing at the moment. Most of the augmented-reality stuff just overlays images on top of the standard camera preview.

    Secondly… the mapping stuff (daltonization) I find really interesting. I can’t see how it would be possible to perfectly preserve all the information in a spectrum containing all the colours… because red and green have to be mapped to something, so they’re going to have to be mapped to some colour that is already represented in that spectrum. However, I can see how in most cases it can be really effective.
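
    For instance, taking a crude single-matrix deuteranopia approximation like the one sketched in the post (illustrative numbers only), pure red and pure green don’t land on one identical colour, but their separation collapses sharply:

    ```python
    import numpy as np

    # Illustrative deuteranopia approximation, as in the post above.
    DEUTERANOPIA = np.array([
        [0.625, 0.375, 0.000],
        [0.700, 0.300, 0.000],
        [0.000, 0.300, 0.700],
    ])

    red, green = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
    print(np.linalg.norm(red - green))                   # ~1.41 apart originally
    print(np.linalg.norm(DEUTERANOPIA @ (red - green)))  # ~0.56 after simulation
    ```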

    Thanks for the interesting post… it’s got me thinking!

    ps. Your “full writeup” link took me to something on tropical climate… suggest you check that!

    • Oct 17 2010

      Thanks very much for the comments! Sorry the reply was long in coming… yes, now that I’m playing with some of the augmented-reality apps, I see your point about real-time processing. On the one hand (and I’m straying into an area I don’t know!), I would think a simple transformation from RGB to something else should be low horsepower, but maybe I’m assuming there’s some sort of GPU to offload that to, which may not be the case with phones. But someday, someday!

      Yes too on the daltonization issue… no matter what, information will be lost in mapping down. The question, I think, is whether the mapping can be done in such a way as to maximize perceptual “contrast” in the remaining color space. The goal would be to increase the likelihood that the viewer sees red and green as “different” from each other, even if they couldn’t necessarily say which was which. That is: if there are a finite number of effective color regions in a full 3-space definition, is it possible to partition out a reduced 2-space in such a way as to recreate a comparable number of color regions? (I think the answer in many cases would be yes, since many scenes seem to be made up of far fewer actual colors than a full 3 dimensions are needed to quantize.) So long as the transformation is adaptive (i.e., doesn’t need one single transformation to work for every possible scene), this seems intuitively possible…
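
      One way to sketch that adaptive idea (every name and threshold here is purely illustrative): quantize the scene down to its handful of effective colors, project that palette through the simulated deficit, and find which entries collide; an adaptive corrector would then only need to push those particular entries apart.

      ```python
      import numpy as np

      DEUTERANOPIA = np.array([   # same illustrative simulation matrix as above
          [0.625, 0.375, 0.000],
          [0.700, 0.300, 0.000],
          [0.000, 0.300, 0.700],
      ])

      def effective_palette(image, levels=4):
          """Crudely quantize an H x W x 3 image in [0, 1]; real scenes
          typically yield far fewer distinct colors than levels**3."""
          q = np.round(image * (levels - 1)) / (levels - 1)
          return np.unique(q.reshape(-1, 3), axis=0)

      def colliding_pairs(palette, threshold=0.1):
          """Palette entries whose simulated appearance is too close to
          tell apart; Euclidean RGB distance is a crude stand-in for a
          perceptual metric such as CIELAB delta-E."""
          sim = palette @ DEUTERANOPIA.T
          return [(i, j)
                  for i in range(len(sim))
                  for j in range(i + 1, len(sim))
                  if np.linalg.norm(sim[i] - sim[j]) < threshold]
      ```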

  2. Eric
    Oct 2 2011

    An application exists on the Android market:
    https://market.android.com/details?id=fr.nghs.android.cbs.enhancer

  3. Apr 12 2012

    Hi guys,
    I’ve read your post and comments, and I’d be glad if you could test or have a look at this iPhone/iPad application my brother and I developed recently.

    It has a daltonizer algorithm that enhances vision for protanopia, deuteranopia, and tritanopia. We called it iDaltonizer, or eye-daltonizer 🙂 The website is http://www.idaltonizer.com, if you are interested.

    Any comments, ideas, criticism, and so on are welcome!


Trackbacks & Pingbacks

  1. iPhone Color-Blindness Correction, Redux « Dennis' Blog
