iPhone 7/7+ Raw Capabilities

Sample image: iPhone 6 standard camera shot

Would an iPhone 7 raw capture have produced more shadow detail, more highlight detail, and less sky noise than this iPhone 6 standard camera shot?

While the internet is flooded with negative articles about the iPhone 7 series and how little new it has to offer (a great way to get clicks, whether you have anything meaningful to say or not), there are, in fact, a number of very interesting new features, especially in the 7+. I will wait to discuss the dual cameras and what they offer for phone photography, as well as the wide-gamut P3 colorspace of the new iPhones, until I actually have one in hand (the prudent way to write about any product). In the meantime, though, I can’t resist commenting on another feature of the new phones, and for that matter of other recent iPhones running iOS 10.

That would be the ability to shoot raw images. Not that the native camera app which Apple supplies (and which accounts for the vast majority of images shot with iPhones) offers such an option; but the capability is available for third parties to use, and Adobe is making a splash by supporting it in their Lightroom camera function. But first, let’s step back and think about what raw really means.

Raw means nothing unless there are more than 8 bits (256 levels) of meaningful data available. So the value of raw functions of any type on iPhones will depend on how much meaningful raw data is actually captured, and made available for use, by these phones.

Experience with DSLRs and mirrorless cameras has shown that ten bits of data is good, and twelve bits is better. But where does such “extra” data show up, since screens often don’t display more than 256 levels per color channel anyway?

It shows up mostly when you make significant adjustments to the file, to open the shadows or enhance the highlights. Because of the peculiar way bit depth works in a raw file, extra bits allow us to keep much more highlight detail while still leaving levels for further down the range. However, unless the captured dynamic range contains meaningful data, not noise, in the deep shadows, the value of that extra depth is questionable.
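
To make that concrete: a sensor’s linear encoding devotes half of all available levels to the brightest stop, half of the remainder to the next stop, and so on down the range. A quick sketch of the arithmetic (illustrative only, not tied to any particular camera):

```python
# Rough sketch: levels available per stop in a linearly encoded raw file.
# A linear encoding gives the brightest stop half of all levels, the next
# stop half of the remainder, and so on. Values here are illustrative only.

def levels_per_stop(bit_depth: int, stops: int = 6) -> list[int]:
    """Return the approximate number of levels in each stop, brightest first."""
    total = 2 ** bit_depth
    levels = []
    for _ in range(stops):
        total //= 2          # each stop down gets half the remaining levels
        levels.append(total)
    return levels

for bits in (8, 10, 12):
    print(f"{bits}-bit:", levels_per_stop(bits))

# 8-bit:  [128, 64, 32, 16, 8, 4]      -> deep shadows get only a handful of levels
# 10-bit: [512, 256, 128, 64, 32, 16]
# 12-bit: [2048, 1024, 512, 256, 128, 64]
```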

So what we will be looking for from raw capture, as we test the iPhone 7 and 7+ (and iOS 10 with phones from the 6s forward), is the ability to produce more highlight and shadow detail, and the ability to make big density shifts in editing software without causing “thinness”, which shows up as posterization in one or more zones after the edit has been made.
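
Thinness is easy to spot in a histogram: after a big density shift, a stretched file shows comb-like gaps where levels have been pulled apart. A minimal sketch of such a check, assuming 8-bit grayscale test images loaded with Pillow and NumPy (the filenames are hypothetical):

```python
import numpy as np
from PIL import Image

def histogram_gaps(path: str, lo: int = 0, hi: int = 255) -> int:
    """Count empty histogram bins between lo and hi in an 8-bit grayscale image."""
    pixels = np.asarray(Image.open(path).convert("L"))
    counts, _ = np.histogram(pixels, bins=256, range=(0, 256))
    return int(np.sum(counts[lo : hi + 1] == 0))

# The same heavy shadow lift applied to an 8-bit JPEG and to a raw capture:
# many more empty bins in the shadow zone suggests posterization ("thinness").
print(histogram_gaps("lifted_from_jpeg.png", lo=0, hi=64))
print(histogram_gaps("lifted_from_raw.png", lo=0, hi=64))
```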

How will the iPhone 7 series perform in raw mode? These are tiny sensors, which are therefore prone to much more noise, especially in the shadows and in dim lighting. Perhaps the 7+ with its dual camera functionality will be able to reduce that noise a bit, but don’t expect raw capture from the iPhone 7 and 7+ to respond like a recent-generation DSLR when editing. We can hope that it will provide at least an incremental improvement over previous iPhone images.

The real question is whether the improvement from shooting raw in Lightroom, over the standard iPhone camera, is large enough and frequent enough for us to make the Lightroom camera our default, go-to choice for shooting.

Copyright C. David Tobie

Mac Edition Radio Interview with C. David Tobie & David Saffir

Harris Fogel caught up with David Saffir and myself at PhotoPlus Expo, and spent an evening discussing color management, as it relates to photography, mobile, and video. Here’s a MacEditionRadio.com audio interview that captures some of that conversation. Thanks, Harris, for an interesting evening, and a very professional interview!

Credits: C. David Tobie, Copyright 2013. Website: CDTobie.com

Time Lapse Photography – Give It a Shot for Free

Time lapse photography consists of a series of still images, taken at intervals, which can be played back as a fast-motion video. Time lapses can be shot with a wide range of cameras, and don’t necessarily require much special equipment. This article will serve as an intro to the basics of low-cost time lapse.
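
The basic arithmetic is worth keeping in mind before shooting: playback length is simply frame count divided by playback frame rate. A back-of-the-envelope sketch, with illustrative numbers only:

```python
# Back-of-the-envelope time lapse math (illustrative numbers only).
interval_s = 5        # seconds between shots
playback_fps = 30     # frame rate of the finished video
clip_length_s = 10    # desired length of the finished clip

frames_needed = playback_fps * clip_length_s       # 300 frames
shoot_time_min = frames_needed * interval_s / 60   # 25 minutes of shooting

print(f"{frames_needed} frames, about {shoot_time_min:.0f} minutes of shooting")
```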

Can I Shoot Time Lapse Without Buying Any Special Hardware?

That really depends on what you own now. In order for the images in a time lapse to register correctly with one another, it’s necessary to keep your camera stable while you shoot the series. This typically means using a tripod. For DSLRs, this could mean a serious tripod, but for test purposes, or for simple web-grade time lapses, a low-cost tripod and camera holder such as the Joby Mpod Mini Stand is fine. Using a stand that only holds your phone horizontally will avoid Vertical Video Syndrome, and ensure your videos fit on-screen appropriately.

Image Courtesy of Joby

Can I Shoot Time Lapse Without Buying Any Special Software?

Some cameras have the ability to shoot multiple exposures over time built into their firmware, available in the camera’s menu options. For more flexibility, some type of external timer and trigger system is typically used. Triggertrap’s iPhone and Android apps are free, and offer simple time lapse functions that can be used with the phone’s internal camera, as well as a broader range of functions for triggering external cameras. So it’s possible to start creating sample time lapses instantly, at no cost, if you have some system for holding your phone steady. Applications from freeware to top-end Adobe apps can be used to composite your time lapses once they are shot and edited.
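
At its core, any intervalometer, whether in-camera or app-based, is just a capture loop with a fixed delay. A minimal sketch of the idea (capture_frame() is a hypothetical stand-in for whatever actually fires your camera or phone):

```python
import time

def capture_frame(index: int) -> None:
    """Hypothetical stand-in for whatever triggers your camera or phone."""
    print(f"captured frame {index:04d}")

def time_lapse(interval_s: float, frame_count: int) -> None:
    """The basic intervalometer loop: fire the camera at a fixed interval."""
    for i in range(frame_count):
        start = time.monotonic()
        capture_frame(i)
        # Sleep only the remaining time, so intervals stay even if capture is slow.
        time.sleep(max(0.0, interval_s - (time.monotonic() - start)))

time_lapse(interval_s=5.0, frame_count=300)   # 300 frames, 5 seconds apart
```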

Image Courtesy of Triggertrap

What Types of Time Lapses Can I Shoot?

Triggertrap has multiple modes: standard, even-interval time lapse; TimeWarp, which speeds up as it shoots; and DistanceLapse, which shoots the same number of frames per block, mile, or kilometer, even if the conveyance carrying the camera slows down and speeds up with traffic. These three modes are available for your smartphone camera as well as for external cameras. Star Trail mode, which creates night shots with the stars rotating around the north (or south) celestial pole, and a bulb ramping mode, which is more advanced than we will cover here, are only available for external cameras.
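
TimeWarp-style ramping simply means the gap between shots shrinks as the sequence progresses, so playback appears to accelerate. A geometric ramp is one reasonable way to sketch the idea (this is an illustration, not Triggertrap’s documented algorithm):

```python
# Sketch of a TimeWarp-style interval ramp (not Triggertrap's actual curve):
# the gap between shots shrinks geometrically, so playback seems to accelerate.

def ramped_intervals(start_s: float, end_s: float, frames: int) -> list[float]:
    """Geometrically shrink the shooting interval from start_s to end_s."""
    ratio = (end_s / start_s) ** (1.0 / (frames - 1))
    return [start_s * ratio ** i for i in range(frames)]

intervals = ramped_intervals(start_s=10.0, end_s=1.0, frames=120)
print(intervals[:3])   # first shots roughly 10 s apart
print(intervals[-3:])  # final shots roughly 1 s apart
```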

Image Courtesy of Triggertrap

Triggertrap can also be triggered by sound (great for fireworks, game trails, etc.), vibration (which could use the same examples as sound), or even facial recognition, all of which work with internal phone cameras as well as external cameras. It also offers two HDR modes, which are designed for merging multiple exposures into single images with increased dynamic range. The HDR functions are external-camera-only.

Where Would I Go from There?

Once you are comfortable with the concepts of time lapsing, the next likely step is to get a Triggertrap Mobile Dongle and Camera Cable (total cost at the Triggertrap website of slightly over $30US) to control your real camera from your smartphone. As well as adding new functions that only an external camera is capable of, and improved resolution and sharpness, the biggest plus of time lapsing with a DSLR is the increased light sensitivity. Evening is the most exciting time for urban time lapses, and the better low-light sensitivity, along with better handling of light sources within the image frame without halo effects, makes for much improved night shots with a DSLR instead of a phone camera. While add-on wide-angle lenses make wide angle possible with phone cameras, telephoto lenses on DSLRs offer a whole new range of opportunities.

Image Courtesy of Triggertrap

Where Does It End?

Who says it has to end? With the beauty of landscape time lapses with clouds rushing by, rivers and roads with cars and watercraft moving through, and all the interesting activities of urban locations, the possibilities are endless. And that is before even considering the human factor: documenting the ebb and flow of people in a subway station, or cars at traffic lights from above, are just two suggestions. Then there are bulbs popping up, flowers bursting open, and birds building nests, for macro photography lovers. Even the traffic on an anthill can be mesmerizing when time lapsed.

What are the Other Hardware Options?

If time lapsing becomes a serious interest, then advanced tools can be added that enrich the time lapse experience. The first addition is typically a dolly track. Dynamic Perception makes a series of professional track systems that allow your camera to move gracefully during a time lapse, adding dimensionality as close-up objects move against the background during the process. Three-axis robots, such as those built by Emotimo, allow an even wider range of motion, tilting from the landscape up to the sky, or panning side to side during a time lapse. Combining both allows a camera to move past an object while swiveling to keep it in the center of view, much as a person does when walking or riding by an object of interest.

Image Courtesy of Revolve

Tiny tracks and robots are becoming more available as low-cost alternatives to this type of pro equipment, such as Glidetrack’s Mobyslyder and the Revolve Camera Dolly.

Image Courtesy of Glidetrack

Editing and Color Managing Your Time Lapses

One of the advantages of time lapse work over most types of video is that it is actually a series of still images. This means it is possible to shoot in RAW mode and gain the advantages of better highlight and shadow control, and it also makes it easy to color manage one shot and then apply those color corrections to all the images in the series. Shooting a SpyderCube or SpyderCheckr can help you determine the settings for your time lapse work, and a SpyderLensCal can assure that your focus is in the exact zone desired. Back in the studio, those Cube or Checkr shots can then be used for RAW adjustment before creating your time lapse from your individual images.
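
Once the corrected frames have been exported as numbered stills, assembling them into the finished video is a one-step job. A sketch using ffmpeg from Python (ffmpeg must be installed; the filenames and frame rate are illustrative):

```python
import subprocess

# Assemble corrected, numbered stills (frame_0001.jpg, ...) into a time lapse.
# Requires ffmpeg on the system path; filenames and settings are illustrative.
subprocess.run([
    "ffmpeg",
    "-framerate", "24",        # playback rate of the finished time lapse
    "-i", "frame_%04d.jpg",    # the numbered frames exported after RAW correction
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",     # broadest player compatibility
    "timelapse.mp4",
], check=True)
```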

Credits: C. David Tobie, Copyright 2013. Website: CDTobie.com

Color Analysis of the iPad Mini and Retina iPad Mini

Note: I am republishing this article, as it pertains equally to the new Retina Display iPad Mini, which shares similar screen color with the non-Retina version.

Characteristics of the Fourth Generation iPad with Retina Display

There are not too many surprises with the fourth generation full size iPad (wouldn’t it be great if Apple gave these products functional names?). It’s largely a refresh for the sake of updating the processor and moving the connector system to the new Lightning Connector. Both are worthwhile improvements, but not anything to concern us here. Still, there are more questions to be considered with the new iPad mini.

Characteristics of the iPad mini

The screen of the iPad mini offers the pixel count of the pre-Retina iPads, in a smaller form factor. That’s not Retina resolution, but by way of its decreased screen size, its pixel density falls somewhere between the non-Retina full size iPads and the Retina versions. Many will choose to live with this enhanced, but not “Retinaed”, resolution (yes, I just turned Retina into a verb) in return for the convenience of the smaller form factor and the lower price of the new mini. But what about its color characteristics for serious uses? Has Apple taken a step backwards there as well, in order to make the new mini more cost competitive in its (already populated) size range?

Earlier Testing

The answer is yes, and no. As we know from previous testing, the Retina iPad screens (and the iPhone 5 Retina screen as well) have been updated from the earlier, twisted, sub-sRGB color space of early iOS devices to a color space very close to sRGB. The gamut plot below shows the iPad 3 with sRGB overlaid, as that order provides the clearest indication of their match. The green primary of the iPad 3 actually exceeds sRGB by a bit, but overall this is a great match.

sRGB gamut over iPad3 gamut

We have also studied the gamut of earlier iOS devices, and seen how this gamut affects their display of web images (in sRGB) and web videos (in Rec-709, which shares a number of key characteristics with sRGB). The image below shows the second generation iPad overlaid on the third generation iPad, revealing the smaller and twisted gamut of the earlier screens. There is no doubt that the color accuracy of the sRGB-sized recent devices is superior to the older devices.

iPad2 gamut over iPad3 gamut

Color of the iPad mini

With that background information in mind, let’s look at the gamut of the iPad mini in relation to sRGB. First, it’s important to note that the white point (global color tone) of the iPad mini is close to the target value of 6500K, and the gamma (ramp from black to white) is very close to the target value of Gamma 2.2. In the image below, you will recognize the earlier, sub-sRGB gamut and twisted primaries, with the added twist that primary green, as well as primary blue, is offset sufficiently from the sRGB primaries to lie outside of sRGB, making color correction that much more difficult.

iPad mini gamut in blue, compared to sRGB in red
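
For those who want to quantify this kind of comparison rather than eyeball gamut plots: whether a measured primary lies inside another gamut is just a point-in-triangle test in xy chromaticity space. A sketch, using the standard sRGB primaries (the sample points are illustrative, not measured iPad mini values):

```python
# Point-in-triangle test in CIE xy chromaticity space: does a measured
# primary fall inside the sRGB gamut triangle? The sRGB primaries are the
# standard values; the sample points below are illustrative, not measurements.

SRGB = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # R, G, B (x, y)

def _side(p, a, b):
    """Signed-area test: which side of the edge a->b the point p falls on."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def inside_gamut(p, tri=SRGB) -> bool:
    """True if chromaticity point p lies inside (or on) the triangle tri."""
    d1, d2, d3 = _side(p, tri[0], tri[1]), _side(p, tri[1], tri[2]), _side(p, tri[2], tri[0])
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

print(inside_gamut((0.3127, 0.3290)))  # the D65 white point -> True
print(inside_gamut((0.20, 0.70)))      # a green beyond sRGB's primary -> False
```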

Conclusion

Yes, this gamut looks quite familiar, as you can see by comparing it to the previous illustrations. The iPad mini does indeed revert to the smaller, twisted shape of the earlier iOS gamut. Apple seldom takes a step backward in their relentless move forward, but here we have one example of it. So, if you were considering getting an iPad mini for use as a photo or video portfolio, please note that these color deficiencies will affect your results.

It is quite likely that in the next generation of iPad mini, Apple will move the device forward to a full sRGB gamut (and who knows, perhaps Retina resolution as well). So at this time the larger gen 3 and gen 4 iPads are the optimal iPads for display of critical color. It is possible to color calibrate the iPad mini with Datacolor’s SpyderGallery application, to produce corrected color (within the limits of the reduced gamut) in the Gallery viewer, or in other Apps if you launder your images through SpyderGallery. But for color critical uses, it may be worth holding off for a generation, to see what Apple has up its sleeve next time for the iPad mini.

Credits: C. David Tobie, Copyright 2013. Website: CDTobie.com

SpyderGallery: Now Supports Android Phones & Tablets

For more than a year, Datacolor has been saying that SpyderGallery for Android was under development, to provide display color calibration to Android, as well as iOS. Now it has finally been released, and is available for free from the Google Play Store. But what took so long? 

Get SpyderGallery for Android at Google Play

Variety is the Spice of Android

It turns out that the wide array of screen sizes, shapes, and resolutions, as well as the wide array of screen and backlight types found in Android, Kindle, and Nook devices running Android v3.0 or higher, required considerably more control than the much smaller, more consistent set of variables in iPhones and iPads. No big surprise there, but to maintain the consistency that the iOS version of SpyderGallery offers, it was necessary to develop some new techniques.

Android White Balance

The first of these is a new method of white balancing screens under SpyderGallery. Many Android screens run very cool, and blue. This increases their perceived brightness, but it also reduces their color accuracy across the board. Under Android, SpyderGallery measures each screen’s color temperature, and each image is corrected to be in the desired 6500K range when viewed in SpyderGallery. Just toggle the color management on and off to see how far from 6500K your Android device’s native color is; for some devices the native color is quite close, for others it’s surprisingly far off.
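
Conceptually, this kind of white point correction is a per-channel gain that pulls the screen’s native white toward the D65 target. A much simplified sketch of the idea (a real calibration builds a measured, device-specific transform; the native white value here is illustrative):

```python
import numpy as np

# Simplified white balance sketch: scale channels so the screen's native
# white maps toward a D65-like target. A real calibration uses a measured,
# device-specific transform; the native white here is an illustrative value.

native_white = np.array([0.92, 0.97, 1.00])  # a cool, blue-leaning screen (R, G, B)
target_white = np.array([1.00, 1.00, 1.00])  # normalized D65-ish target

gains = target_white / native_white

def correct(image: np.ndarray) -> np.ndarray:
    """Apply per-channel gains to a float RGB image with values in [0, 1]."""
    return np.clip(image * gains, 0.0, 1.0)

pixel = np.array([[0.5, 0.5, 0.6]])
print(correct(pixel))   # red and green are boosted relative to blue, warming the image
```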

Android Energy Savers

Next came the issue of “energy saving features”. These occur both at the Android OS level, and in the manufacturer’s software for some lines of Android devices. It’s important that no type of dimming function occur while screen measurements are being taken by the Spyder, and that no type of differential adjustment is made to the screen later in the name of energy saving. Simple backlight adjustment based on ambient light should not be a problem, but other adjustment techniques can cause color accuracy issues. Solving these problems required testing a wide range of devices, to see just which ones had such issues, and which did not.

Android Gamma Curves

And finally, there proved to be quite a range of gamma settings on Android devices. Gamma describes the relative brightness of different densities in an image. Again, lowering the gamma from the standard 2.2 to something closer to 1.8 can make a screen look brighter, but at the cost of sacrificing accurate tonality on screen, washing out midtone colors as well as densities. SpyderGallery for Android now offers advanced gamma correction, which assures that your images’ densities are as they were intended, and avoids washed-out colors.
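
Gamma correction in software boils down to a single power function: to make a display with a too-low native gamma show the tonality intended for gamma 2.2, each normalized pixel value is raised to the ratio of the two exponents. A sketch (the native gamma here is illustrative; a calibrator measures the real one):

```python
import numpy as np

# Gamma correction sketch: pre-compensate image values so a display with a
# non-standard native gamma shows the tonality intended for gamma 2.2.
# The native gamma below is illustrative; a calibrator measures the real one.

TARGET_GAMMA = 2.2
native_gamma = 1.8   # a brighter, washed-out display

def compensate(image: np.ndarray) -> np.ndarray:
    """Remap [0, 1] pixel values so displayed output matches the target gamma."""
    return np.clip(image, 0.0, 1.0) ** (TARGET_GAMMA / native_gamma)

midtone = np.array([0.5])
print(compensate(midtone))   # ~0.43: midtones are darkened to counteract the
                             # display's brighter-than-standard response
```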

Android and Spyder4

The advanced nature of these measurements, and the wide array of screens in the Android Universe, meant that only the more advanced Spyder4 line of calibrators is sufficiently effective for use under Android. So while Spyder3 will continue to be supported for iOS calibration, Spyder4 (Express, Pro, or Elite) is required for Android calibration.

From the Horse’s Mouth

Long-time acquaintance John Nollendorfs tested SpyderGallery for Android, and was quite enthused about the results. Here it is in his own words:

“I’ve had my Nexus 7 for nearly a year, but have been disappointed in its lack of color in the display. Using Spydergallery, the colors now match my Spyder-calibrated desktop screen perfectly! I use my Nexus 7 to show clients images from my shoots in progress. Up to now, I have to explain how the final images will have better color. With Spydergallery, the progress shots on the Nexus pad appear as good as on my desktop calibrated screen.”

Color Management under Other Mobile Applications

Now that SpyderGallery for Android has been released, the next goal will be to provide color management capabilities for other mobile applications under iOS and Android. This will mean that the same type of color improvements you now see in images viewed in the SpyderGallery viewer will also be available in third party image viewing, image editing, and presentation apps. Stay tuned, to find out which applications will offer Spyder-based color management.

Credits: C. David Tobie, Copyright 2012. Website: CDTobie.com

Adobe Angst and the Creative Cloud

Creative Cloud Image Courtesy of Adobe

The saying is that you can’t stop progress. And yet we aren’t always happy about it. Many of us live in towns or neighborhoods which once were quite self-sufficient, with services from barbers and hairdressers to hardware and grocery stores. Now most of us have to travel miles to reach larger, more impersonal alternatives to these long-gone local shops. That’s progress.

The digital revolution has been a main focus of progress in the last couple of decades, and many of the professions that have been “digitized” have suffered as a consequence. Unless your digitally impacted line of work is licensed and mandated, you probably earn less, and have less job security, than in the analog days.

Photographers and graphic designers have been hard hit in this manner, and at the same time that their livelihoods have been impacted, their responsibilities have mushroomed. They learned a range of new and foreign computer skills, studied everything from prepress standards to color management, and those that survived have developed some type of balance in the digital world.

The description above attempts to set the stage for the angst that has rocked the photo and graphics community since Adobe, King of imaging software, announced that it is moving virtually all of its graphics, photo, and video applications to a subscription-only model. Adobe did not do this lightly, nor without research and testing. It’s not something that user response is likely to change: Adobe has seen the future, and this is their response to it.

And the plans Adobe is offering are not unreasonable for many users. One or another option will meet the needs of many at reasonable prices, and they are actually a very good deal for some. But end users have still been very troubled by the change, and have been expressing their angst in various on-line forums in quite colorful language.

I will apologize in advance for using this term, but there is really no other description for what Adobe is doing than “paradigm shift”. It’s a change in the basis underlying the whole field of imaging and design. It feels, to many, as though, after years of owning their own homes, officials have knocked on the door and told them that home ownership is no longer allowed, and that they will need to pay rent in the future. Those who firmly believe in ownership, in not owing anyone anything, in buying what you can afford, as you can afford it, and in making your purchase decisions carefully, may certainly find this troubling.

I have no intention of pitching the Creative Cloud options to Adobe users, not today at any rate. Nor do I plan to raise my voice in protest; I believe Adobe is sincere, serious, and probably right in what they are doing. But I did want to address the angst I am hearing in the voices of many users. Much of this is a matter of how we think about things, and if some concepts from me can ease anyone’s concerns, then I’ll consider that enough.

We have never actually owned our software. We bought it, and in some cases even the right to resell it, under a “shrink-wrap license agreement.” Software is written, like a novel, and unlike most things we buy, it is covered by copyright on the basis of being written. The fine print in your license agreements has long told you that you are licensed to use the software under certain terms; this is not ownership in the sense that you can own a horse or a baseball bat.

Owning software has always been a cooperative venture. I have recommended against purchasing products from companies in financial difficulty, which might not be around to provide bug fixes, updates, and new versions over time. Adobe recently publicly released the source code of Photoshop 1. I downloaded a copy. Its availability underscores the fact that what Photoshop was then is not of much use now; the value lies in the movement forward, and all the versions in between (which I sometimes awaited in agony, when my workflow didn’t really work until a new feature or function was released) were a work in progress. “Owning” Photoshop 1, which I now own in a far more concrete manner than I did in its heyday, since the actual source code is in my hands, means nothing today, except as a curiosity or an educational tool.

The future of design and photography, more so than most other fields, depends on the prosperity of Adobe. If Adobe disappeared tomorrow, videographers would complain as they moved up to Avid, if they could afford it, or down to Final Cut X if they could not. Non-Adobe options for graphic design, vector art, page layout, and photography are far weaker, and we would all prefer that Adobe remain in business, even if we would simultaneously hope for some competition to keep prices reasonable and features moving forward.

We are in danger of losing our best newspapers, with their failure to transition to a digital subscription system that people will accept. Adobe’s position in photo and design is bigger than any one newspaper. In fact, without InDesign, Illustrator, and Photoshop, the remaining newspapers would have quite a scramble to continue producing papers. So the financial health of Adobe, and its move forward into the type of financial model that appears to be the future for higher-value applications (as opposed to low-cost apps), is important for all of us who have drives full of InDesign layouts, catalogs of Lightroom-adjusted RAW files, layered Photoshop files, Adobe PostScript fonts, and Illustrator images.

Credits: C. David Tobie, Copyright 2012. Website: CDTobie.com

FocusTwist: Focus-Controllable Images with the iPhone

FocusTwist Logo Courtesy of Arqball

When you think of controlling the focus of images after the fact, you probably think of the Lytro camera; a clever little device (one is tempted to say prototype) that shows us one way of gaining different info from a shot, instead of spending all our pixels on increased resolution. But now there is an iPhone/iPad app (I’m tempted to think of this as a prototype as well) which allows you to perform a similar trick with your phone photos.

With Arqball’s FocusTwist app, it’s time, not resolution reduction, that is used to produce the multiple images. Hold your iPhone still, tap on the foreground element on screen to start the focus process, and in a couple of seconds the FocusTwist app will have captured multiple images with different focal planes, starting with the foreground element you selected. Take a look at this example, which I shot with FocusTwist to include in this article.

FocusTwist Image Example

The process is something of a gimmick, in that the resulting photo can’t be used as a standard image, since there is no current format for “multiple focal plane images”. The other “gimmicky” factor is FocusTwist’s expectation that the foreground object be three to five inches from the lens. This is a great range for macro shots with recent iPhone models, and it shows off the focal plane change function clearly. But it also makes all photos taken with FocusTwist rather similar. The term “meme” comes to mind.

But there are other issues than the “one trick pony” aspect of the application. Note that, while the iPhone was carefully placed and oriented for several seconds before the shot was taken, FocusTwist failed to orient the image correctly; it appears to be a one-orientation pony as well. And what if one wished to adjust the exposure or other factors of the image, say to lighten the tub handles in the foreground? Since this is not a standard image, it cannot be edited in a standard image editor, so rotation, lightening, cropping, and other adjustments are not possible. Please recall my “prototype” comment above.

Will FocusTwist images make the rounds as the next phone photo fad? Will such images be passé in a few months? Or are the capabilities of this App perhaps a bit deeper than the directions and marketing video imply? One further dimension that immediately comes to mind is time: it would be possible to capture boughs waving in the breeze, cars moving on the road, or a dancer spinning on the floor in the multiple frames of a FocusTwist image; particularly if Arqball chose to extend the App’s capture capabilities.

How much further will Arqball move with features and functionality? Will they add an option to render out to video, or animated GIF, so that the results of their app can be widely used, instead of being trapped in the snowglobe of their own application and website? Will they see this as the beginning of a category, or a parlor trick? Only time will tell…
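
For what it’s worth, once the individual frames are extracted, nothing exotic is needed for that export: assembling a focal sweep (or any frame stack) into an animated GIF takes only a few lines with Pillow. A sketch (the filenames are hypothetical):

```python
from PIL import Image

# Assemble a stack of frames (for example, a focal sweep) into an animated GIF.
# Filenames are hypothetical; Pillow must be installed.
frames = [Image.open(f"focus_{i:02d}.jpg") for i in range(8)]
frames[0].save(
    "focus_sweep.gif",
    save_all=True,            # write all appended frames, not just the first
    append_images=frames[1:],
    duration=200,             # milliseconds per frame
    loop=0,                   # loop forever
)
```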

Credits: C. David Tobie, Copyright 2013. Website: CDTobie.com