Over the last decade, nearly every professional in the media and entertainment business has developed an opinion on the pros and cons of shooting camera raw.
Some are unrelenting supporters of raw, while others protest that it’s just another buzzword.
Whatever your position, you might think that after so many years of debate the industry would have collectively agreed on an answer by now. But when a disagreement goes on for this long without easily quantifiable or definitive answers, it probably means the topic is a lot more complex than it appears.
I’ve spent 15 years in Hollywood’s post-production and production workflow scene. Even after all that time, I’ve found that the topic of raw image capture is still tangled up in misinformation, misunderstandings, and mystification.
So in today’s article, I’m examining three common perspectives about raw image capture, in an attempt to clarify a few misconceptions and dispel a few myths.
But before we dive in, in the spirit of full disclosure, I’ll admit that I’m a strong supporter of raw image capture and raw workflows.
In fact, I founded one of the first post houses to deploy a true end-to-end raw workflow back in 2007. Over the years, my team and I performed budgetary comparisons across hundreds of projects, and while we believe raw is always the ideal, it doesn’t mean that we blindly encouraged everyone to use it.
All that to say, in order for us to ask the right questions about raw workflows, we need to first recognize that there isn’t always a single answer. One of the most important aspects of finding the “best” answer is knowing when it applies and when it does not.
I hope the following breakdown will help give you better clarity about when raw image capture is an actual, qualitative benefit versus when it is merely a buzzword.
When it comes to technological progress, what was once true may not continue to be true forever. This is especially important when it comes to raw image capture because we all know raw files are so much larger and costlier than RGB…or are they?
Simply put, not all raw files are created equal. The way in which raw data is encoded will have a dramatic effect on the cost to archive original camera files (OCF). Of course, the particular camera, codec, settings, and many other factors influence how that raw data is encoded, and therefore, how big it is.
RED, SONY, Blackmagic, Apple, and Canon all use compressed raw, while Arri and Panasonic use uncompressed raw. And remember, the individual resolution settings in each of these cameras independently influence other factors.
So to get a good idea of the differences, here’s a chart to compare the size of one hour of the highest quality raw recording these formats can muster (in their respective versions of 4K/UHD).
With all resolutions at 4K, uncompressed raw cameras top 1 terabyte per hour, while compressed raw cameras hover around half a terabyte per hour.
But what about when each camera is set to even higher resolutions? If we factor in the actual full resolution for each of these systems, things look a little different.
In the chart below, we can see that the delta between raw recording formats of the most popular professional cameras today isn’t that significant—with the highest quality cameras and their best settings generally shooting approximately 1 terabyte per hour.
Obviously, this snapshot doesn’t represent every camera format on the market, but when it comes to the discussion of cost, the difference between shooting ARRIRAW 4K, RED 8K, and SONY 6K is less than 10 percent. Between these cameras, the difference in storage cost is only marginal.
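To put rough numbers behind these comparisons, here’s a back-of-envelope sketch of how hourly storage can be estimated from sensor geometry, bit depth, frame rate, and compression ratio. The specific resolutions and ratios in the example are illustrative assumptions, not any manufacturer’s specifications.

```python
# Back-of-envelope estimator for hourly camera-original (OCF) storage.
# A raw stream records one value per photosite; the resolution, bit
# depth, and compression ratio below are illustrative assumptions,
# not manufacturer specifications.

def hourly_storage_tb(width, height, bits_per_photosite, fps, compression=1.0):
    """Estimate terabytes per hour for a single-value-per-photosite raw stream."""
    bits_per_frame = width * height * bits_per_photosite
    bits_per_hour = bits_per_frame * fps * 3600 / compression
    return bits_per_hour / 8 / 1e12  # bits -> bytes -> decimal TB

# Uncompressed 12-bit 4K raw at 24 fps: just over a terabyte per hour
print(f"{hourly_storage_tb(4096, 2160, 12, 24):.2f} TB/hr")  # 1.15 TB/hr

# The same sensor with 5:1 compressed raw
print(f"{hourly_storage_tb(4096, 2160, 12, 24, compression=5):.2f} TB/hr")  # 0.23 TB/hr
```

This simple model lines up with the figures above: uncompressed 4K raw lands just over a terabyte per hour, while compressed raw comes in at a fraction of that.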
How is it these cameras all manage to land so closely on storage requirements, when they are shooting raw so differently? Engineers have worked hard to make their cameras competitive. If their camera required a level of storage beyond what their target customers could reasonably bring to their workflow, they wouldn’t sell very many because the costs associated with it would be too high.
So let’s debunk the myth that data from higher resolution cameras must be vastly more expensive than lower resolution cameras. As the real-world data shows, this is not inherently true.
In fact, because an LTO 8 tape costs approximately $150, the difference in cost between the three raw formats in the example amounts to less than the cost of the LTO tape itself. That’s an easily quantifiable cost when designing a workflow, and shouldn’t be a stumbling block.
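The tape math is quick to sketch. LTO-8 native capacity is 12 TB per tape; the roughly $150 tape price comes from the figure above, while the 40-hour shoot size and two-copy archive practice are illustrative assumptions.

```python
import math

# Sketch of LTO-8 archive cost. Native LTO-8 capacity is 12 TB per
# tape; the ~$150 tape price comes from the article. The 40-hour shoot
# and two-copy (A/B) archive practice are illustrative assumptions.

LTO8_NATIVE_TB = 12
TAPE_COST_USD = 150

def archive_cost(total_tb, copies=2):
    """Return (tapes needed, tape-stock cost) for an A/B copy archive."""
    tapes = math.ceil(total_tb / LTO8_NATIVE_TB) * copies
    return tapes, tapes * TAPE_COST_USD

# 40 shoot hours at 1.0 TB/hr vs 1.1 TB/hr (roughly the <10% spread
# between the raw formats discussed above)
print(archive_cost(40 * 1.0))  # (8, 1200)
print(archive_cost(40 * 1.1))  # (8, 1200) -- the spread disappears at tape granularity
```

Because tapes are bought in whole units, a sub-10-percent difference in data rate often rounds away entirely, which is exactly why it shouldn’t be a stumbling block.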
When it comes to shooting common high quality raw formats, the market spread is actually much narrower than people generally assume. In addition, compressed raw cameras offer options that allow users to increase mild compression to shrink file sizes and further reduce archive costs.
However, the biggest problem with charts like these is that they influence people to make decisions that are based on only one factor. So be warned. Giving the raw data rate too much weight in your decision can potentially dictate a workflow that might be less than ideal for a given project.
For example, if you want to use a preferred camera but see a chart like this, you might conclude that the camera’s raw data rate puts it out of budget for your project. In that case, rather than abandoning the camera entirely, it might be a better bet to keep it and simply not shoot raw.
Since 2015, the differences between raw and RGB have continued to become smaller and smaller. In fact, when it comes to non-HDR applications, I no longer believe it is essential to always shoot raw in order to experience the same visually-tangible and qualitative benefits that only raw provided for so many years. This is because RGB formats like ProRes have added 12-bit encoding support and much milder compression than in the past.
For example, ProRes XQ is a 4:4:4, 12-bit codec with a very efficient compression scheme. We all know it looks good, but is ProRes XQ a viable substitute if you can’t justify the storage expense of shooting raw?
A useful rule of thumb for me is as follows:
Shooting ProRes XQ when mastering in same-as-source raster sizes is so similar to the raw equivalent that the cost advantage is more significant than the qualitative trade-off.
With resolution, dynamic range, color, and bit depth all visually identical, only a slight change in compression becomes a factor for most camera configurations. Because ProRes XQ is only a modest 4.5:1 compression, the difference is not only negligible, there are cases where the slight denoising and softening of mild compression can be an advantage.
You could argue that this wasn’t always the case, but as far as recent professional technology goes, with today’s top CMOS sensors, the full dynamic range the camera captures is no longer necessarily exclusive to the raw recording.
In other words, camera engineers do everything they can to preserve the same linear data fed into the raw stream as they do in a log-encoded RGB stream (or YCbCr stream) like ProRes.
I was personally obsessive about this when it came to the Panavision DXL camera, because I am a huge driver of proxy workflow, and that requires the dynamic range of ProRes to match the dynamic range of the raw in SDR configurations.
After numerous discussions, spreadsheets, and analyses over the years, I can’t universally decree that shooting raw saves money in lighting or provides post with more latitude and freedom in exposure, visual effects, or color.
When a camera encodes a debayered image (with or without compression), the log processing of that image has the ability to retain the source bit depth. So even without raw, you can retain much of the detail you need for creative and technical adjustments in post.
For example, in Alexa, the signal processing to ARRIRAW is 12 bits per pixel. Similarly, the signal processing to the Alexa ProRes is 12 bits per pixel (when encoding in XQ or 4444). Because the 12-bit linear source is the same for both formats, coupled with the fact that Arri delivers a log-encoded raw, the same dynamic range is maintained in LogC for both the raw and RGB ProRes (XQ).
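The idea that a log curve can preserve a sensor’s full dynamic range within a fixed bit depth can be illustrated with a toy example. Note that this is a generic log curve for illustration only, not Arri’s actual LogC math; the black level and 13-stop range are assumptions.

```python
import math

# Simplified illustration of why log encoding preserves dynamic range
# within a fixed bit depth. This is a generic log curve for
# illustration only -- NOT Arri's actual LogC formula.

def log_encode(linear, black=0.001, stops=13):
    """Map linear values in [black, black * 2**stops] to a 0..1 signal."""
    white = black * 2 ** stops
    clamped = min(max(linear, black), white)
    return math.log(clamped / black) / math.log(white / black)

# Scene values one stop apart, spanning the full 13-stop range
scene = [0.001 * 2 ** n for n in range(14)]
codes = [round(log_encode(v) * 4095) for v in scene]  # 12-bit quantization
print(codes)
# Each stop lands the same distance apart in code values (4095 / 13 = 315),
# so highlight stops get as many codes as shadow stops.
```

A linear encoding would spend half of all code values on the brightest stop alone; the log curve spreads them evenly, which is why a 12-bit log RGB stream can carry the same usable range as the raw.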
A similar principle applies to color. In the case of shooting the same resolution (e.g. 3.2K), the difference between raw and ProRes (though extremely subtle) is in the difference in compression—not in the dynamic range or color.
Therefore, there’s no demonstrably significant advantage in lighting time, color correction time, or image range/flexibility for visual effects. While there are situations where a computer could discern a difference during advanced compositing or keying, it’s limited to more extreme cases.
In the case of Arri, prior to the introduction of ProRes XQ and of 3.2K ProRes recording, there was a more significant difference between ARRIRAW and ProRes. Today, that gap is virtually closed, thanks to the advancements of excellent engineering. This paves the way for shooting ProRes to become more financially beneficial, without sacrificing quality.
So why do so many people insist on shooting raw if the difference is fairly minimal? I suspect the answer is that they remember when the delta between raw image capture and RGB used to be far more significant, when RGB images like ProRes had greater compression ratios, lower bit rates, and were limited to 1080/2K resolution.
Ironically, resolution used to be a significant factor because when RGB streams were limited to 1080p, they were almost always scaled. In-camera scaling was notoriously poor quality, especially since many formats did not scale with whole numbers, creating aliasing and artefacting in 1080p record streams. In-camera scaling also had to be done at high speed, another tradeoff that put an additional dent in quality.
However, given the improvements over the past four years, in-camera RGB streams now tend to measure up to raw record rasters (up to 4K) and can be captured at 1:1 pixel, which significantly reduces in-camera aliasing and scaling artefacts.
This means that although the technological gap between raw and RGB has not been fully eliminated, it has continued to shrink—and it’s a good reason why ProRes is becoming the preferred recording method for the overwhelming majority, driven by the balance between quality and cost.
So when is raw the best solution?
In my opinion, it’s actually fairly simple: when you can afford the absolute highest quality, why not? If archive costs and download times aren’t significant driving factors, you can never go wrong by capturing in raw.
Raw also offers other benefits, such as better handling of noise, edge detail, and out-of-gamut colors, with headroom for future improvement, since newer debayering technologies are often backwards compatible with older raw files.
For DI facilities willing to adopt advanced raw DI workflows, there are significant advantages to working with raw. One practical advantage is that changing white balance and tint to normalize a desired white point will produce fewer color artefacts than using color controls like slope, power, offset or lift, gamma, or gain to achieve the same white point.
But when cost is a major factor and raw isn’t in the budget, exploring RGB options can help you use the camera you want and get the quality you need. I observed this firsthand when I noticed RED Monstro and Panavision DXL become popular episodic cameras in 2019.
Even though most of the episodic world only required 4K or HD mastering, many DPs wanted to shoot large format to take advantage of enhancements in depth of field, field of view, and low-light performance. Given the higher cost of storing 8K raw, many creatives started shooting in ProRes mode, intentionally disabling 8K raw.
This gave DPs virtually identical dynamic range, color flexibility, and large-format benefits as shooting raw, but the ProRes data was between 50 and 75 percent smaller than raw data covering the same image area. Using RGB in this way allowed productions to leverage the benefits of 8K large-format capture in an HD ProRes workflow, making it identical to the familiar Alexa 2K workflows that many episodic productions prefer.
Even if you only plan on mastering in HD, large-format sensors and lenses give you unique optical characteristics that carry through to a low-resolution HD master.
So is raw a qualitative benefit or an overhyped buzzword? From my experience, the answer is yes to both.
The biggest problem, as I see it, is when you don’t know whether shooting raw improves or complicates your production workflow—and how to assess its advantages or disadvantages.
It’s also important to resist the desire to bring the resolution argument into the raw discussion because, as I’ve demonstrated, they’re not mutually exclusive. If anything, in many cameras, a 4K ProRes XQ file can actually be larger than a 4K raw file.
But RGB begins to hit its threshold when it comes to HDR mastering. The bare minimum for high dynamic range to be effective is 12-bit files, so when HDR is the mastering target, RGB files are already working at their limit. Raw files, with more efficient compression, higher data rates, and greater bit depths, have more data to work with that can be assigned to a larger, more malleable set of code values.
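The bit-depth side of that argument is simple arithmetic: how many code values each bit depth can dedicate to each stop of a log-encoded signal. The 14-stop scene range below is an illustrative assumption.

```python
# The bit-depth arithmetic behind the "12-bit minimum for HDR" point:
# code values available per stop of a log-encoded signal.
# The 14-stop scene range is an illustrative assumption.

STOPS = 14  # assumed dynamic range of the encoded scene

for bits in (8, 10, 12):
    total = 2 ** bits
    per_stop = total / STOPS
    print(f"{bits}-bit: {total} code values, ~{per_stop:.0f} per stop")
```

Moving from 10-bit to 12-bit quadruples the code values available across the same signal range, which is exactly the kind of headroom HDR grading consumes.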
Keeping in mind that technology is ever-evolving, and given ProRes’s limits for HDR applications and future-proofing, Apple released ProRes RAW, a new codec with ultra-high-efficiency compression and optimization. As ProRes RAW adoption trickles through the market over the next three years, it’s likely that today’s RGB ProRes is reaching its peak of common usage.
In fact, the idea of a new resolution-independent codec that opens up possibilities for compressed raw is ideal. Manufacturers of post-production tools prefer that codecs not be driven independently by individual camera companies, because keeping up with each proprietary format is a challenge. ProRes is the most important professional camera codec in the history of digital cinema, and ProRes RAW is the blend of a trusted codec platform with the advantages of raw.
Perhaps in the future we’ll have one solution for every camera to shoot and edit instead of having to debate which flavor is good, fast, or cheap. But for now, we need to remember that every workflow is different, and that means we always need to check our assumptions to arrive at the best solution.
Last year, Amazon introduced its smart glasses product, the Echo Frames. They’re still a bit of a work in progress and you can only get them by requesting an invite, which Amazon has to approve. My invite was recently accepted so after we spent the $179 preview price (full retail will be $249 when these are generally available), I received the Echo Frames and I’ve been wearing them for the past two weeks.
Are they worth the price tag? That’s difficult to say but I think for most people, no, they’re not. Let me tell you why after explaining what they are and what they do.
Amazon Echo Frames are a much simpler take than Google Glass, the first smart glasses I ever wore—and, at $1,500, a very expensive pair. Google Glass has a camera and a heads-up display. The Amazon Echo Frames do not.
Instead, these are standard-looking glasses frames with a thicker-than-normal temple (the side of the frames that rest on your ear).
Those are where the internals reside: the battery, a small processing chip, a Bluetooth radio, two microphones, and four speakers. Even with those components inside the frames, you’d be hard-pressed to see these as anything but regular glasses if you saw someone wearing them. The Echo Frames come with clear lenses, by the way. You can have your prescription lenses fitted to the frames but, of course, that’s another out-of-pocket expense.
One of the side arms is a touch and swipe sensor and there’s also a small button for power as well as a rocker arm for volume adjustments. Again, you wouldn’t see them if you didn’t know they were there.
The Echo Frames work exactly as advertised, are well-designed and the setup is drop-dead simple, taking all of a minute through the Alexa app.
When paired with your phone and the Alexa app on Bluetooth, they bring Alexa to your face. They’re always listening for the “Alexa” command, which works well, and you can mute your microphones when needed. In my testing, Echo Frames got the vast majority of my voice commands correctly.
The two pairs of speakers are aimed at your ears and it’s easy to hear Alexa when you ask for her help. You can also listen to music over Bluetooth from your phone and it sounds OK. Since the small speakers aren’t in your ears, the sound quality is marginal at best; don’t expect much bass, for example. On the plus side, there aren’t any speakers actually in your ear so you can hear the world around you while enjoying a little music.
Since Echo Frames are connected to your phone, you can take or dismiss phone calls, just like any other Bluetooth headset, with the touch-sensitive gesture area. People I called said the voice quality was very good, although one person noted a little wind noise when I was outside on a call. For Android users, Echo Frames can also read your phone notifications aloud; that’s a feature I couldn’t test because it’s not available for iPhone yet. You can also set up a VIP list to only allow certain notifications to be accepted by the frames.
One feature that is on both Android and iOS is texting by voice. The voice-to-text translation works well, but it sends your contact an audio recording of your dictation along with the text message itself. That’s odd, and I hope Amazon drops the audio file or lets you turn it off in a future update.
Aside from using Echo Frames for calls, texts, and various questions for information, I also tested them as a smart home voice controller with a few of my devices still connected to my Amazon Echo. The Frames worked no differently from my Echo in this case, which is a good thing. And if you prefer to use the native voice assistant on your phone, such as Siri or Google Assistant, you can do that with a button press on the frames. You understandably can’t have Echo Frames always be listening for them, however.
So why am I not too keen on the Echo Frames? There are two main reasons.
First, the battery life is a bit lacking, which confounds me since I have smaller smartwatches that can get through two full days of use. Amazon says you can expect about “14 hours of battery life in mixed usage of 40 Alexa interactions, 45 minutes of music, podcast or other audio playback, 20 minutes of phone calls, and 90 incoming notifications”.
I didn’t try to replicate these figures exactly but they sound accurate based on my tests. And I was able to listen to music for nearly 3.5 hours on a full charge, which exceeds Amazon’s expectation of three hours. But I never made it to 14 hours with mixed usage. And the battery charger is a proprietary magnetic connector, which I’m not a fan of.
Second, even with Echo Frames, you still need your phone with you at all times to use Alexa. Paying $250 just to move Alexa from the phone in your hand to the glasses on your face is a big ask for me.
I’d feel differently if they could do more, such as track some health data, for example.
You could actually get that feature, along with Alexa, sleep tracking, and up to six days of battery life, with the Fitbit Versa 2, which costs $199. If you only want hands-free Alexa features, you can already get them in some Bluetooth headsets, such as Amazon’s own Echo Buds for $129 or the $179 Jabra Elite Active 75t. The key difference is that you’re likely to wear Echo Frames all day, something you might not want to do with a Bluetooth headset.
I understand that Amazon is going for simplicity here. And they’ve done a good job in creating a product that provides that simplicity. However, I struggle to recommend such a limited feature set for the price.
The Echo Frames aren’t a bad product, and if you’re all in on Amazon for your smart home or your digital assistant, you’d probably like them. But this isn’t a groundbreaking wearable device for $250 when your handset or another wearable with more features can fill the same need. And if you have prescription glasses, don’t forget to add the cost of your lenses.
They’re a pass for me at this price and I think for most other people as well unless you just have to have an easier way to access Alexa all day long. If everyone could buy them for $179, that would be a better value. Even then though, I’d say they’re just a bit overpriced for what they deliver, and what they don’t. There are just too many other devices at similar price points that add more functionality for less cost.