Michael Kammes answers 5 THINGS about technologies and workflows in the media creation space. Production, post production, reviews, gear, tech, and some pop culture snark.
Exactly what video editing software does Hollywood use for film and television? I have an update to our most popular episode from seven long years ago, and today we'll reexamine the landscape and see how Hollywood has adapted since then, including a change from the big three solutions. The original "The Truth About Video Editing Software" in 2017 got some attention. Many years ago, I dissected the video editing landscape here in Hollywood and told you what most editors were using. But most importantly, why? Now, before we look at what's changed, I do encourage you to check out the original The Truth About Video Editing Software in Hollywood from 2017, to see how we got to where we were…then. Since 2017, desktop software has improved. New tools have entered the market, and creating content on your phone now includes editing. There are just so many options out there, so I've reached out to various folks in the industry, both on the business and creative side. Some have been under FrieNDA (which is my favorite type of agreement) and some have been under the cloak of night in empty parking garages here in Southern California. But all have granular insight into the industry and where we are today.

1. Avid Media Composer

When we last left Avid in 2017, they were the undisputed software choice for film and TV picture editing here in Hollywood. Media Composer, their flagship product, had multiple versions for beginners, professionals, and enterprise users. And as a side note, they dominated the post-audio market with Pro Tools. Avid even had their own branded hardware. Now you've got to consider the unique place Avid occupied in 2017. Media Composer looked…well, let's face it, dated. It was designed using then-decades-old iconography and design, a legacy from the days of helping film editors transition from analog editing to digital. Because most film and TV post-production pipelines within post facilities relied on Avid-centric workflows, updating the interface and workflows to attract new users and grow Avid's business heavily risked alienating existing users. As my grandmother used to say, "You're damned if you do and you're damned if you don't." However, in 2019, Avid completed a new look for Media Composer, although they eventually acquiesced and offered the "Classic" look several years later. Avid has also since introduced UME, their new "Universal Media Engine," which replaces AMA and thus successfully dodges another unfortunate product name (*cough* Avid ISIS *cough*). And after 30 years, Media Composer and Pro Tools finally have direct interoperability: Avid made the round trip from Media Composer to Pro Tools and back easy with a direct session import and export. No AAFs needed! When the pandemic hit, the industry had to quickly adapt while working remotely, and while Avid had existing products for these distributed teams, they were either expensive, better suited for broadcast news, or, like Avid's cloud editing product Edit on Demand, both expensive and still in beta. Despite these product roadblocks for remote work, most productions stuck with good old-fashioned Media Composer. Work-from-home editors remoted into existing Avid workstations back at their facilities, which maintained environment familiarity and access to the usual shared storage. Avid rental facilities set up racks of systems to keep post-production moving.
Alternative editing solutions to Media Composer didn't offer huge cost savings for film and TV productions, whether it be doing everything in the cloud or switching to another editing software platform. By and large, if you cut with Media Composer, you stuck with Media Composer. A review of the Academy Award nominees and winners from 2017 until now also shows that a majority of the nominees and winners were cut with Avid Media Composer. At a business level, Avid reported 15,000 to 20,000 Media Composer Cloud subscriptions in late 2017 and into early 2018. Now, that may seem low. I mean, how can you dominate one industry with only 20,000 seats of software? But keep in mind, seven years ago, not every Media Composer user had a cloud subscription. Many users still had legacy licenses, which were not yet counted within cloud subscriptions. Now, if we fast forward to late 2023, Avid reported over 150,000 cloud subscriptions for Media Composer prior to their acquisition by the private equity firm Symphony Technology Group. It's pretty clear Avid remains the standard for film and TV editing in 2024. But what will happen now that private equity owns Avid? Well, that's a subject for another video.

2. Adobe Premiere Pro

Premiere Pro has been part of the Adobe Creative Cloud package since its inception, giving Adobe a significant advantage in gaining users due to the existing widespread use of other Adobe apps like Photoshop and Illustrator. I mean, Premiere is right there to download and use! In 2017, Adobe was also still capitalizing on an industry misstep: Final Cut Pro Classic had been killed off years prior, and facilities and editors had to find a new video editing solution that was ready to go and could handle workflows for film and television. And let's face it, Final Cut Pro X wasn't ready at the time. Now, you could have moved to Media Composer, but many saw this as a nonstarter. I mean, they had already opted to go with Final Cut Pro Classic over Media Composer years prior anyway, and that decision wasn't going to change. At the time, that really only left Premiere Pro. And so in 2017, Premiere Pro was a second, albeit distant, choice for film and TV editing. Since 2017, however, Adobe has continued to enhance its Productions workflow, which provides Premiere Pro editors with a collaborative experience similar to Avid's Bin and Project Sharing, a mainstay in collaborative film and TV post-production. And this additional feature did gain Adobe some high-profile film and TV projects. The first Deadpool, Terminator: Dark Fate, and David Fincher's projects like Gone Girl and his Netflix series Mindhunter were all edited with Premiere Pro, as was last year's Best Picture Oscar winner Everything Everywhere All At Once. Now, if we look at Hollywood-adjacent productions like those at the Sundance Film Festival, we've seen Premiere Pro being used on a greater number of films over the years, and it's now accounting for over half of all projects showcased at the festival. And even more projects use Adobe if you include other Adobe tools like Frame.io. However, the fact remains that Premiere Pro in Hollywood is still more the exception than the standard. The general consensus is that Premiere Pro is still easily the main alternative to Avid Media Composer for film and TV work. But is that a bad thing?
I mean, it's important to remember that the film and TV market is relatively small compared to other, more lucrative media-rich markets, not to mention social media, where video content is produced at an exponentially faster rate than film and television. In fact, recent studies show that a majority of young Americans aspire to be social media influencers over most any other career. Film and TV are no longer the only media vehicles for visibility, let alone creative work. So for Adobe, why continue to focus heavily on a market that is no longer the main avenue that many creatives aspire to work in? I think I'd rather take the bag of money from a faster-growing market over a case study and my logo in the credits of the film. Adobe doesn't release much in the way of metrics for app usage. However, Adobe did report that it had approximately 12 million Creative Cloud subscriptions in 2017, and the subscriber count has grown to over 33 million in 2023. This year, however, Adobe has faced many challenges, including poor communication about how their AI is trained and emerging legal issues with the DOJ and FTC over early subscription termination fees and a complex cancellation process. As of now, it's unclear how this will affect Adobe moving forward.

3. Apple Final Cut Pro

Those of us who were at the 2011 NAB SuperMeet knew we were seeing something special during Apple's surprise unveiling of Final Cut Pro X, but little did we know the fallout from this single announcement would have lasting repercussions for Apple editing software in Hollywood. Now, the Final Cut Pro X launch missteps have been chronicled extensively elsewhere, so I don't need to rehash them ad nauseam. But I will mention a few key points, as they're important context for understanding where we are today. Higher-profile film and TV post-production requires a solid tech stack and software interoperability. This includes multiple hardware components as well as a trained staff to maintain it. Often, complete turnkey systems for post-production are sold by what we call Systems Integrators or VARs (Value Added Resellers). Post facilities often buy from VARs because pricing is generally better, and the VAR can build, test, deploy, and support these solutions. Putting the "value" in "Value Added Reseller." Unlike Final Cut Pro Classic, Final Cut Pro X did not go through the usual sales channels. It was an App Store purchase. This meant that the VARs' role was largely diminished, and thus professional adoption was slowed. It would take Apple several years to get Final Cut Pro X to a level where it could be used in professional film and TV workflows. We had a perfect storm of factors that left most professional editors, and especially facilities, simply not interested in moving forward with Apple editing software. And as I said in 2017, it's these hurdles that made Final Cut Pro X a distant third in Hollywood film and TV editing. And it hasn't gotten any better. In 2022, the editing community created a petition asking Apple to publicly stand by the use of Final Cut Pro for the TV and film industries worldwide. Now, to their credit, Apple did respond, and they committed to the joint development of training and certification courses, plus creating an industry advisory panel and an increase in Hollywood-centric workshops for film and TV editors.
So while Final Cut Pro X has matured to be a professional editing solution and has been used on a handful of high-profile projects, the window of opportunity to ascend or even eclipse the level of its predecessor has unfortunately closed. However, as a slight silver lining, Apple hardware was and still is a mainstay in the professional content creation industry, and I bet many of you have an iPhone instead of an Android. 4. Blackmagic Design DaVinci Resolve When I last covered DaVinci Resolve , many of you predicted it would become Hollywood’s next “do all” tool. And you were…half right. Resolve has had significant updates since Blackmagic acquired it, including adding Fusion for VFX and Fairlight for post-audio, which creates a very comprehensive suite of tools. Resolve’s speed makes it excellent for transcoding, and in some specialized cases, Fusion can replace expensive legacy VFX systems. But it’s Resolve’s color tools that have now become the industry standard for color grading. And it’s not just because of their intricate controls, affordability, and overall innovations, but because the existing color tool sets in Hollywood ten years ago were, let’s face it, lacking. Avid’s advanced color grading tool, Symphony , had very few feature updates at the time and required higher monthly costs. Lumetri , the Premiere Pro color tool, was also beginning to show its age following Adobe’s acquisition of SpeedGrade in 2011. Plus, the other color solutions on the market were either expensive or existed outside the video editing software that most creatives were familiar with, so there was a major deficit in accessible color tools, but not in creative video editing tools. Avid and Adobe’s editorial tools had decades of nuanced development and were already affordable for most. Resolve’s video editing features were still evolving at the time. The flip side, however, is that even if Blackmagic did introduce revolutionary video editing tools within Resolve at the time, they still would have faced massive resistance from a Hollywood industry still wary of wholesale video editing changes following the failure of Final Cut Pro X to succeed. There simply wasn’t enough incentive for Hollywood to switch to Resolve for professional video editing. But that’s not the full story. The free version of Resolve, or the one-time payment for Studio , is a no-brainer compared to continual, eternal paid subscriptions from other leaders like Adobe and Avid. Bundling the more powerful Resolve Studio with a Blackmagic camera purchase was also a fantastic marketing move. Resolve offers incredible value, especially for new editors, and it’s a well-established business tactic that getting buy-in from the younger, next generation of creatives is a fantastic way to boost sales down the road. While the details on exact download numbers and usage are not publicized, in 2019, Blackmagic CEO Grant Petty did say that Resolve downloads topped 2 million , and I can’t see any reason why it still wouldn’t be growing. As an interesting side note, however, there’s a vast difference between the number of free users of Resolve and the number of paid users of Resolve Studio. The general consensus is that most professional film and TV creatives are going to need the features found in the paid Studio version, but most industry analysts estimate that only 5% or so of Resolve users actually pay for the Resolve Studio upgrade. 
So those professional creatives, at least those in Hollywood, are overwhelmingly using Resolve for color work and other side tasks, but they're not using it as their primary video editor. With Avid and Adobe taking the top spots as the creative editorial choices here in Hollywood, Resolve as a video editor faces steep competition, not unlike the smartphone market, where Apple and Android dominate the marketplace with no real challengers. So, however distant, that third place for video editing software in Hollywood that was previously occupied by Final Cut Pro can now easily be filled with the logo from Blackmagic Design. But as with Adobe, this begs the question: "Are the vanity bragging rights for Hollywood's editors worth the windfall of cash to be found by capturing other markets?"

5. Everything Else

As in life, there are no absolutes, and there will always be exceptions and outliers. Parasite won Best Picture in 2019, and it was cut with Final Cut Pro Classic, software that had been dead for eight years. The legendary Thelma Schoonmaker has used Lightworks for several of Scorsese's movies, including Killers of the Flower Moon, which earned ten Oscar nominations. If anything, these two footnotes should reinforce that it's not the tool, it's the talent. Knowing the right tool makes you employable. Your talent is what keeps you employed. So what do you think we'll see in the next seven years? Will predominantly mobile solutions like CapCut or open-source solutions make their way into the professional film and TV market? Or do you have a hot take on AI in Hollywood? I look forward to your thoughts. Until the next episode, learn more, do more. Thanks for watching.
Generative A.I. is the big sexy right now. Specifically, creating audio, stills, and video using AI. What's often overlooked, however, is how useful Analytical A.I. is. In the context of video analysis, that means facial or location recognition, logo detection, sentiment analysis, and speech-to-text, just to name a few. And those analytical tools are what we'll focus on today. We'll start with small tools and then go big with team and facility tools. But I will sneak in some Generative AI here and there for you to play with.

1. StoryToolkitAI

StoryToolkitAI can transcribe audio and index video

Welcome to the forefront of post-production evolution with StoryToolkitAI, the brainchild of Octavian Mots. And with an epic name like Octavian, you'd expect something grand. He understood the assignment. StoryToolkitAI transforms how you interact with your own local media. Sure, it handles the tasks we've come to expect from A.I. tools that work with media, like speech-to-text transcription. But it also leverages zero-shot learning for your media. "What's zero-shot learning?" you ask. Imagine an AI that can understand and execute tasks that it was never explicitly trained for. StoryToolkitAI with zero-shot learning is like playing charades with someone who somehow gets it right the first time, while I'm still standing there making random gestures, hoping someone will figure out that I'm trying to act out Jurassic Park and not just practicing my T-Rex impersonation for Halloween.

How Zero-Shot Learning Works. Via Modular.ai.

Powered by the goodness of GPT, StoryToolkitAI isn't just a tool. It's a conversational partner. You can use it to ask detailed questions about your indexed content, just like you would with ChatGPT. And for you DaVinci Resolve users out there, StoryToolkitAI integrates with Resolve Studio. However, remember Resolve Studio is the paid version, not the free one. Diving even nerdier: StoryToolkitAI employs various open-source technologies, including the RN50x4 CLIP model for zero-shot recognition. One of my favorite aspects of StoryToolkitAI is that it runs locally. You get privacy, and with ongoing development, the future holds endless possibilities. Now, imagine wanting newer or even bespoke analytical A.I. models tailored for your specific clients or projects. The power to choose and customize A.I. models. Well, who doesn't like playing God with a bunch of zeros and ones, right? Lastly, StoryToolkitAI is passionately open-source. Octavian is committed to keeping this project accessible and free for everyone. To this end, you can visit their Patreon page to support ongoing development efforts (I do!). On a larger scale, and on a personal note, I believe the architecture here is a blueprint for how things should be done in the future. That is, media processing should be done by an A.I. model of your choosing, with transparent practices, that can process media independently of your creative software.

A potential architecture for AI implementation for Creatives

Or better yet, tie this into a video editing software's plug-in structure, and then you have a complete media analysis tool that's local and uses the model that you choose.
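To make "zero-shot" a little less abstract, here's a minimal, standalone sketch using the open-source CLIP package and the RN50x4 model mentioned above. To be clear, this is not StoryToolkitAI's internal code, and the frame name and labels are hypothetical; it just shows the underlying trick of scoring a frame against labels the model was never explicitly trained on.

```python
# Minimal zero-shot sketch with the open-source CLIP package
# (requires torch, pillow, and the openai/CLIP package).
# NOT StoryToolkitAI's code: just the underlying idea of scoring a frame
# against candidate labels the model was never explicitly trained on.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50x4", device=device)  # the model mentioned above

# Hypothetical frame from your footage, and labels you care about.
image = preprocess(Image.open("frame_0420.jpg")).unsqueeze(0).to(device)
labels = ["an interview in an office", "a drone shot of a coastline", "a T-Rex in a jungle"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)  # similarity of the frame to each label
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()[0]

for label, p in zip(labels, probs):
    print(f"{p:.2%}  {label}")
```

No per-project training pass and no labeled dataset: the ranking falls out of the pretrained model, which is why tools built on it can index footage they've never seen before.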
2. Twelve Labs

Have you ever heard of Twelve Labs? Don't confuse Twelve Labs with Eleven Labs, the A.I. voice synthesis company. Twelve Labs is another interesting solution that I think is poised to blow up… or at least be acquired. While many analytical A.I. indexing solutions search for content based on literal keywords, what if you could perform a semantic search? That is, using a search engine that understands words from the searcher's intent and their search context. This type of search is intended to improve the quality of search results. Let's say here in the U.S. we wanted to search for the term "knob"; other English speakers may be searching for something completely different. That may not actually be the best way to illustrate this. Let's try something different.

"And in order to do this, you would need to be able to understand video the way a human understands video. What we mean by that is not only considering the visual aspect of video and the audio component but the relationship between those two and how it evolves over time, because context matters the most."
Travis Couture, Founding Solutions Architect, Twelve Labs

Right now, Twelve Labs has a fairly generous free plan, hosted by Twelve Labs. However, if you want to take deeper advantage of their platform, they also have a developer plan. Twelve Labs tech can be used for tasks like ad insertion or even content moderation, like figuring out which videos featuring running water depict natural scenes (rivers and waterfalls) versus manmade objects (faucets and showers).

Twelve Labs use cases

With no insider info, I'd wager that Twelve Labs will be acquired, as the tech is too good not to be rolled into a more complete platform.
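Twelve Labs' platform works on the video itself (visuals, audio, and how they relate over time), so the following is only a conceptual, text-only illustration of what "semantic" ranking means, using the open-source sentence-transformers library over hypothetical shot descriptions rather than their API.

```python
# Conceptual, text-only illustration of semantic search (NOT Twelve Labs' API):
# rank hypothetical shot descriptions against a natural-language query by meaning,
# not by literal keyword matches.  pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

shots = [
    "water flowing from a kitchen faucet into a sink",
    "a waterfall cascading over mossy rocks in a forest",
    "a river winding through a canyon at sunset",
    "a shower head spraying water in a tiled bathroom",
]
query = "natural scenes with running water"

shot_vecs = model.encode(shots, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vec, shot_vecs)[0]

for score, shot in sorted(zip(scores.tolist(), shots), reverse=True):
    print(f"{score:.3f}  {shot}")
```

The waterfall and river descriptions should rank above the faucet and shower ones even though none of them contain the phrase "running water" verbatim, which is the behavior the content moderation example above relies on.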
3. Curio Anywhere

During the production of this episode, our friends at Wasabi acquired the Curio product. Next up, we have Curio Anywhere by GrayMeta, and it's one of the first complete analytical A.I. solutions for media facilities. Now, this isn't just a tagging tool. It's a pioneering approach to using A.I. for indexing and tagging your content, using their localized models, and if you so choose, cloud models, too. In addition to all of this, there's also a twist. See, traditionally, analytical A.I.-generated metadata can drown you in data and options and choices, overloading and overwhelming you. GrayMeta's answer is a user-friendly interface that simplifies the search and audition process right in your web browser. This means being able to see at a glance what types of search results are present in your library across all of the models. Is it a spoken word? Maybe it's the face of someone you're looking for. Perhaps it's a logo. And for you Adobe Premiere users out there, you can access all of these features right within Premiere Pro via GrayMeta's Curio Anywhere Panel Extension. Now let's talk customization. Curio Anywhere allows you to refine models to recognize specific faces or objects.

Curio Anywhere Face Training Interface

Imagine the possibilities for your projects without days of training a model to find that person. Connectivity is key, and Curio Anywhere nails it: whether it's cloud storage or local data, Curio Anywhere has you covered. And it's not just about media files either. Documents are also part of the package. Here's a major win: Curio Anywhere's models are developed in-house by GrayMeta, and this means no excessive reliance on costly third-party analytical services. But if you do prefer using those services, don't worry, Curio Anywhere supports those too, via API. For those of you eager to delve deeper into GrayMeta and their vision, I had a fascinating conversation with Aaron Edell, President and CEO of GrayMeta. We talk about a lot of things, including Curio Anywhere, making coffee, and whether the robots will take us over.

4. CodeProject.AI Server

It's time to get a little bit more low level with CodeProject.AI Server, which handles both analytical and generative A.I. Imagine CodeProject.AI Server as Batman's utility belt. Each gadget and tool on the belt represents a different analytical or generative A.I. function designed for specific tasks. And just like Batman has a tool for just about any challenge, CodeProject.AI Server offers a variety of A.I. tools that can be selectively deployed and integrated into your systems, all without the hassle of cloud dependencies. It includes object and face detection, scene recognition, text and license plate reading, and for funsies, even the transformation of faces into anime-style cartoons.

CodeProject.AI Server Modules

Additionally, it can generate text summaries and perform automatic background removal from images. Now you're probably wondering, "How does this integrate into my facility or my workflow?" The server offers a straightforward HTTP REST API. For instance, integrating scene detection in your app is as simple as making a JavaScript call to the server's API. This makes it a bit more universal than a proprietary standalone AI framework. It's also self-hosted, open source, and can be used on any platform, and in any language. It also allows for extensive customization and the addition of new modules to suit your specific needs. This flexibility means it's adaptable to a wide range of applications, from personal projects to enterprise solutions. The server is designed with developers in mind.
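To make that concrete, here's a hedged sketch of a scene-recognition request against a locally running server, written in Python for consistency with the other examples here rather than the JavaScript mentioned above. The port and route follow CodeProject.AI Server's documented defaults as best I recall them, so verify both against your installed version; the frame path is hypothetical.

```python
# Hedged sketch: send a frame to a locally running CodeProject.AI Server for
# scene recognition. Port 32168 and /v1/vision/scene follow the server's
# documented defaults at the time of writing (verify against your install).
# The frame path is hypothetical.  pip install requests
import requests

SERVER = "http://localhost:32168"

with open("frame_0420.jpg", "rb") as f:
    response = requests.post(
        f"{SERVER}/v1/vision/scene",   # other modules hang off similar /v1/... routes
        files={"image": f},
        timeout=30,
    )

result = response.json()
print(result.get("success"), result.get("label"), result.get("confidence"))
```

The same post-an-image pattern applies to the other vision modules, which is what makes the server easy to wire into a watch-folder script or a panel extension.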
5. Pinokio

Pinokio is a playground for you to experiment with the latest and greatest in Generative AI. It unifies many of the disparate repos on GitHub. But let's back up a moment. You've heard of GitHub, right? Most folks in the post-production space don't spend much time on GitHub. It's for Developers! It's what developers use to collaborate and share code for almost any project you can think of. GitHub is where cool indie stuff like A.I. starts before it goes mainstream. Yes, you too can be a code hipster by using Pinokio and GitHub, experimenting with various A.I. services before they go mainstream and become soooo yesterday. As you can imagine, if GitHub is for programmers, how can non-coding types use it? Pinokio is a self-contained browser that allows you to install and run various analytical and generative AI applications and models without knowing how to code. Pinokio does this by taking GitHub code repositories (called repos) and automating the complex setup of terminals, git clones, and environment settings. With Pinokio, it's all about easy one-click installation and deployment, all within its browser. Diving into Pinokio's capabilities, there's already a list chock-full of diverse A.I. applications, from image manipulation with Stable Diffusion and FaceFusion to voice cloning and A.I.-generated videos with tools like K and Animate Diff. The platform covers a broad spectrum of AI tools. Pinokio helps democratize access to A.I. tools by combining ease of use with a growing list of modules across various sectors. Platforms like Pinokio are vital in empowering users to explore and leverage AI's full potential. The cool part is that these models are constantly being developed and refined by the community. Plus, since it runs locally and it's free, you can learn and experiment without being charged per revision.

Every week there are more analytical and generative A.I. tools being developed and pushed to market. If I missed any, let me know what they are and what they do in the comments section below or hit me up online. Subscribing and sharing would be greatly appreciated. 5 THINGS is also available as a video or audio-only podcast, so search for it on your podcast platform du jour. Look for the red logo! Until the next episode: learn more, do more. Like early, share often, and don't forget to subscribe. PSA: This is not a paid promotion.
Is the embedded video not working? Watch on YouTube: https://youtu.be/UO6e_wMAp_0

In 2019, MovieLabs, a not-for-profit group founded by the 5 major Hollywood studios, released a whitepaper called "The 2030 Vision". This paper detailed the 10 Principles that each of the studios agrees should usher in the next generation of Hollywood content creation. What are these 10 principles, why should you care, and who the heck is MovieLabs? Let's start there.

1. What is MovieLabs?

Paramount Pictures, Sony Pictures Entertainment, Universal Studios, Walt Disney Pictures and Television, and Warner Bros. Entertainment are part of a joint research and development effort called MovieLabs. As an independent, not-for-profit organization, MovieLabs was tasked with defining the future workflows of the member studios and developing strategies to get there. This led to their authoring of the "2030 Vision Paper".

2030 Vision Overview

As MovieLabs is also charged with evangelizing these 2030 goals, they've become the Pied Pipers for the future media workflows of the Hollywood studio system. Now, what most folks don't realize is that a vast majority of Hollywood is risk-averse. Changing existing, predictable (and budgetable) workflows doesn't happen until a strong case can be made for significant time and cost savings. Without a defined goal to work towards, studio productions would most likely continue on the path they've been on, iterating only incrementally. So, defining these goals and getting buy-in from each studio meant that there would be a joint effort to realize this 2030 vision. It also gave all the companies who provide technology to the studios a broad roadmap to develop against. Essentially, everyone sees the blinking neon sign of the 2030 Vision Paper, and everyone is making their way through the fog to get to that sign. The 2030 Vision Paper is directly influencing the way hundreds of millions of dollars are – and will – be spent. So, what exactly does this manifesto say?

2. A New Cloud Foundation

The 10 core principles of the MovieLabs 2030 Vision Paper can be organized into 3 broad categories: A New Cloud Foundation, Security & Access, and Software-Defined Workflows. The first half of these 10 principles relate directly to the cloud. This means that any audio and video being captured go directly to the cloud – and stay there. And it's not just captured content, it's also supporting files like scripts and production notes. Captured content can either be beamed directly to the cloud or saved locally and THEN immediately sent to the cloud. This media can be a mix of camera or DIT-generated proxies and high-resolution camera originals, plus any audio assets. As of now, an overwhelming majority of productions are saving all content locally. Multiple copies and versions are then made from these local camera originals. Then, the selected content is moved to the cloud. The cloud is usually not the first, nor the primary, repository for storage. Having everything in the cloud is done for one big reason: if all the assets – from production to post-production – are in a place that anyone in the world with permission can access, then we no longer have islands of the same media replicated in multiple places, whether it be on multiple cloud storage pools or sitting on storage at some facility. We'd save a metric [bleep]-ton of time lost to copying files and waiting for files to be sent to us, plus the cost to each facility of storing everything locally.
Of course, having the media in the cloud does present some obvious challenges. How do we edit, color grade, perform high-resolution VFX, or mix audio with content sitting in the cloud? That’s where Principle #2 comes into play. Most local software applications for professional creatives are built on the concept that the media’s assets are local, either on a hard drive connected to your computer or on some kind of network-shared storage. Accessing the cloud for all media introduces increased lag and reduced bandwidth that most applications are just not built for. Despite this, one of the many tech questions answered by the pandemic was “What is the viability of creatively manipulating content when you don’t have the footage locally?” The answer is that in many cases, it’s totally doable. Companies like LucidLink thrived by enabling cloud storage with media to be used and shared by remote creatives in different locations. LucidLink was the glue that enabled applications to come to the media and not the other way around. Now, in most cases, we still need to use proxies, rather than high-res files, for most software tools. Editing, iterating, and viewing high-res material that’s sitting in the cloud is still difficult unless you have a tailored setup. Luckily, we still have 7 years before 2030 – and advancements on this front are constantly evolving. Working with everything while it’s in the cloud also means that when it’s time to release your blockbuster, you can simply point to the finished versions in the cloud – no need to upload new versions. That’s where Principle #3 comes in. This means no more waiting for the changes you’ve made on a local machine to export and then re-upload, then generate new links and metadata, as well as the laundry list of other things that can go wrong during versioning. The cloud will process media faster in most cases, as you can effectively “edit in place”. The promise is that of time savings and less room for error. Moving on, Principle #4 gets a bit wordy, but it’s necessary. Archives generally house content that is saved for long-term preservation or for disaster recovery, and should rarely, if ever, be accessed. This is because archived material is typically saved on storage that is cheaper per Terabyte, but with the caveat that restoring any of that archived content will take time, and thus money. This philosophy is as true for on-premises storage as it is for cloud storage. It can simply be cost-prohibitive to have massive amounts of content on fast storage when access to it is infrequent. So, Principle #4 is saying that not everyone uses archives the same way, and if you do need to access that content in the cloud somewhere down the road, your archive should be set up in a way that makes financial sense for the frequency of access, while also ensuring that secure user permissions are granted to those who understand this balance. Principle #4 also dovetails nicely into a major consideration with archived assets. Ya know, making more money! Sooner or later, an executive is gonna get the great-and-never-thought-of-before idea to re-monetize archived assets. Whether it’s a sequel, reboot, or super-duper special anniversary edition, you’re going to need access to that archived content. Our next principle provisions for this. Having content in a single location that you’re allowed to see is one thing, but how you accurately search for and then retrieve what you need is, for now, a challenging problem. 
Let's take this one step further: Years or even decades may have passed since the content was archived, and modern software may not be able to read the data properly. Some camera RAW formats, as the paper points out, are proprietary to each camera manufacturer and need to be debayered using their proprietary algorithms. So, do we convert the files to something more universal (for now) or simply archive the software that can debayer the files? We simply don't have a way to 100% futureproof every bit of content. What I'm getting at here is that the content has to be fully accessible and actionable – all in the cloud.

3. Security & Access

OK, great, stuff is in the cloud, but how do I make sure that only the right people have access to it? Slow down and take a breath, as Security and Access is one of the 3 areas of attention in the 2030 Vision paper. Peeling the onion layers back, this will require the industry to devise and implement a mechanism to identify and validate every person who has access to an asset on a production, which we'll call a "Production User ID". This would be for all creatives, executives, and anyone else involved in the production. This "Production User ID" would allow that verified person to access or edit specific assets in the cloud for that production. This also paves the way for permissions and rights that are based on the timing of the project. As the 2030 Vision Paper calls out, "A colorist may not need access to VFX assets but does need final composited frames; a dubbing artist may need two weeks' access to the final English master and the script but does not need access to the final audio stems; and so on." As Hollywood frequently employs freelancers – and executives change jobs – this also allows for users to have their access revoked when their time on the production is over. In theory, tying this to a "Production User ID" means all permissions for any production you work on are administered to your single ID. You wouldn't have to remember even more usernames and passwords, which should make all of you support and I.T. folks watching and reading this a bit happier. As you can imagine, accessing the content doesn't mean it's 100% protected while you're using it. This is where I hope a biometric approach to security takes off sooner rather than later, as the addition of new security layers on top of aging precautionary measures is often incredibly frustrating. Multiple verification systems also increase the chance of systems not working properly with one another, which is a headache for everyone…and your IT folk. MovieLabs suggests "security by design," which means designing systems where security is a foundational component of system design – not a bolt-on after the fact. There is also the expectation that safeguards will be deployed to handle predictive threat detection, and that hopefully this will negate the need for 3rd-party security audits, which are a massive headache and time suck. The security model also assumes that any user at any time could be compromised. Yes, this includes you, executives. You are not above the law. This philosophy is also known as "Zero Trust", and if you had my parents, you'd totally understand the concept of "zero trust". The last Principle in the "Security and Access" section is #8: If you've done any kind of work with proxies, or managed multiple versions of a file, then you've dealt with the nightmare that is relinking files.
Changes in naming conventions, timecode, audio channels, and even metadata can cause your creative application du jour to throw "Media Offline" gang signs. And this isn't just for media files. Supporting documents like scripts, project files, or sidecar metadata files can frequently have multiple versions. This means that in addition to needing a "Production User ID" per person, we also need a unique way to identify every single media asset on a production; and every user with permission needs to be able to relink to that asset inside the application they're using. That's a pretty tall order. The upside is that you shouldn't have to send files to other creatives – you simply send a link to the media, or even just to the project file, which already links to the media. The goal is also to have this universal, relinking identifier functioning across multiple cloud providers. The creative application would then work in the background with the various cloud providers to use the most optimized version of the media for where and what the creative is doing. Oh yeah, all of this should be completely transparent to the user.

4. Software-Defined Workflows

The last 2 principles are centered around using software to define our production and post-production workflows. This principle covers two huge areas: a common "ontology" – that is, a unified set of terminology and metadata – and a common industry API. Consider this: You've been hired to work on The Fast and Furious 27, and there is, unpredictably, a flashback sequence. You need access to all media from the 26 previous movies to find clips of Vin Diesel, err Dom, sitting in a specific car at night, smiling. Have the 26 previous movies been indexed second by second, so you can search for that exact circumstance? How do you filter search results to see if he's sitting in a car or standing next to it? How do you filter out if the car is in pristine shape or riddled with bullet holes after a high-speed shootout with a fighter jet? As an industry, we don't have "connected ontologies," meaning we can't even agree that the term for something in a production sense is the exact same thing in a CGI sense. If we can't even agree on unified terms for things, how can we label them so you can then search for them while working on the obvious Oscar bait Fast and Furious 27? The second ask is a common industry API, or "Application Programming Interface," that all creative software applications use. This allows the software to be built in such a way that it can be slotted into modular workflows. This is meant to combat compatibility issues between legacy tools and emerging workflows…and reduce downtime due to siloed tech solutions. While each modular software solution can be specialized, there will also be a minimum set of data, metadata, and format support that other software modules in the production workflow can understand. This also means that because of this base level of compatibility, all creative functions will be non-destructive. Wait, you may be thinking…how in the world is this accomplished? All changes made during the production and post-production process will be saved as metadata. This metadata can be used against the original camera files. This provides not only the ultimate in media fidelity but also the ability to peel back the metadata layers at any time to iterate.
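To make that idea a bit more concrete, here's a purely hypothetical sketch of what a change recorded as metadata against a uniquely identified asset might look like. None of these field names come from the MovieLabs documents; they're invented to illustrate that the original camera file is never rewritten and that any layer of changes can be peeled back later.

```python
# Purely hypothetical illustration (these field names are NOT from the MovieLabs
# specs): a uniquely identified asset whose "edits" are just an ordered list of
# metadata records applied against the untouched original camera file (OCF).
from dataclasses import dataclass, field
from typing import List
import uuid

@dataclass
class Change:
    operation: str      # e.g. "primary_grade", "trim", "vfx_comp"
    parameters: dict    # the knobs that were turned
    author_id: str      # the "Production User ID" of whoever made the change
    timestamp: str      # when it happened

@dataclass
class Asset:
    asset_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # unique across the production
    ocf_uri: str = "cloud://prod-xyz/ocf/A001_C002_0704.braw"         # hypothetical cloud location
    changes: List[Change] = field(default_factory=list)               # replayed at render time

clip = Asset()
clip.changes.append(Change(
    operation="primary_grade",
    parameters={"lift": 0.02, "gamma": 0.98, "gain": 1.05},
    author_id="user-7F3A",
    timestamp="2024-05-01T18:22:00Z",
))

# The OCF is never rewritten; software resolves asset_id -> ocf_uri and replays
# the change list, so any layer can be removed later to iterate.
print(clip.asset_id, len(clip.changes))
```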
We wrap up with Principle #10: I get this; no one likes to wait. When and if Principle #10 is realized, it means no more waiting for rendering, whether it be for on-set live visual effects, game engines, or even CGI in post. The processing in the cloud will negate the need to wait for renders. Feedback and iteration can be done in a significantly shorter amount of time. Creative decisions on set can be made in the moment rather than based on renders weeks or months later. The 2030 Vision Paper also suggests this would limit post-production time – as well as budget – as more of each would be shifted towards visualizations in pre-photography.

5. Are We On Track?

Yes…ish? There are a ton of moving parts, with both business and technology partners, so MovieLabs annually checks the industry's progress towards the 2030 vision and provides a gap analysis of the current state of the industry and the work that remains. Without question, our industry has made advancements in areas like camera-to-cloud capture, VFX turnovers, real-time rendering, and creative collaboration tools that are all cloud-native. The Hollywood Professional Association – or HPA – Tech Retreat this year, which has become the de facto standard for must-attend industry conferences, showcased many Hollywood projects and technology partners, like AWS, Skywalker, and Disney/Marvel. The 10 principles need a foundation to build upon, and that means getting this cloud thing handled first. And that's just what we saw. Principles 1, 2, and 3 were the predominant Principles that large productions attempted to tackle first, with a smattering of "Software-Defined Workflow" progress. However, these case studies also reflected where we do need to make more progress, as we are already 4 years into this 2030 vision. These gaps include: Interoperability gaps, where tasks, teams, and organizations still rely on manual handoffs that can lead to potential errors and inefficiencies. Custom point-to-point implementations dominate, making integrations complex. Open, interoperable formats and data models are lacking. Standard interfaces for workflow control and automation are absent. Metadata maintenance is inconsistent, and common metadata exchange is missing. We also have gaps in operational support, where workflows are complex and often span multiple organizations, systems, or platforms. There's a gap between understanding cloud-based IT infrastructures and media workflows. All files are not created equal, and media needs specialized expertise over traditional IT. Support models need to match the industry's unique requirements, considering its global and round-the-clock nature. If we take a look at change management, we do run into fundamental problems with a move of this scale. This technology is new and constantly evolving, and it's only being spearheaded by the studios and supporting technology partners. This means few creatives have actually tried and deployed new '2030 workflows.' Plus, managing this change involves educating stakeholders, involving creatives earlier, demonstrating the 2030 vision's value, and measuring its benefits. I'm sure you have some input on one or more of these 5 THINGS. Let me know in the comments section. Also, please subscribe and share this tech goodness with the rest of your techie friends. 5 THINGS is also available as a video or audio-only podcast, so search for it on your podcast platform du jour. Look for the red logo! Until the next episode: learn more, do more. Like early, share often, and don't forget to subscribe.
Editing & Motion Graphics: Amy at AwkwardAnthems.…
Not too long ago, I posted on several social media platforms asking what questions YOU had on AI. Reddit | Facebook | Twitter | LinkedIn I've taken all of the 100+ responses, put them in nice neat little (5!) categories, and – wow – do I have a deluge for you.

1. Current AI Tools

By far, the most asked question was "What tools are available today?" The easiest lift is automated content analysis. For a hundred years, we've relied on a label on a film canister, a sharpie scrawl on the spine of a videotape case, or the name of a file sitting on your desktop. Sure, this tells you the general contents of that media, but it's not specific. "What locations were they at?" "What was said in each scene?" "Were they wearing pants?" This is where AI's got your back. With content analysis, sometimes called automated metadata tagging, AI can analyze your media, recognize logos, objects, people, and their emotions, plus transcribe what was said, and generate these time-based metadata tags. It's a little bit like having Sherlock Holmes on your team, solving the mystery of the missing footage. Because there is always a better take somewhere…right? AI-assisted color grading is also starting to gain traction. With the help of AI models, colorists can quickly analyze the color palette of a video and create a consistent base grade. This can save a ton of time when you're dealing with various bits of media with different formats and color profiles. It does the balance pass so you can start to create…quicker. AI can also suggest starter options for more creative color grades. Like…adding a touch of "teal and orange". Colorists can also use existing content as an example to tell the AI the look you're aiming for. Check out Colourlab.ai. AI tools are revolutionizing audio post-production, too. Noise reduction algorithms powered by AI can effectively remove unwanted background noise, enhance dialogue clarity, and improve overall audio quality. While this tech has been around for a while, adding AI models to existing algorithms is making these tools even more powerful. These models also help with Frankenbiting. Text-to-speech tools that allow generated audio to sound like someone else – also called voice cloning – are fantastic for this. For most of us commercially, it's great right now for scratch narration or to replace a word or two, but it's not quite at the point where you can easily adjust intonation. However, when that is possible, editors may have a new skill to learn – crafting the performance of the voice-over in the timeline. Check out ElevenLabs. Text-to-image and text-to-video products are rapidly being developed; just look at services like Runway, Midjourney, DALL-E, Kaiber, and a host of others. And while we can't yet generate believable video b-roll on the fly (unless you like the current generative AI aesthetic), once that boundary is crossed, this will change the way editors work. OK, instead of adding…can we subtract? Rotoscoping tools are readily available in industry-standard tools like Photoshop. We can erase objects in near real-time and also use tools like Generative Fill for AI to "guess" what is missing right outside of the frame.
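As a small taste of the transcription and time-based tagging described at the top of this list, here's a minimal sketch using the open-source Whisper model as a stand-in; the episode doesn't prescribe a specific tool, and the clip name is hypothetical.

```python
# Minimal sketch of time-based transcription with the open-source Whisper model
# (pip install openai-whisper; requires ffmpeg). Whisper is just one stand-in for
# the speech-to-text tools described above, and the clip name is hypothetical.
import whisper

model = whisper.load_model("base")
result = model.transcribe("interview_take_03.mov")

# Each segment carries start/end times: the raw material for the time-based
# metadata tags a MAM or NLE panel could index and search.
for seg in result["segments"]:
    print(f"{seg['start']:7.2f} - {seg['end']:7.2f}  {seg['text'].strip()}")
```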
Now, Large Language Models – or LLMs – are the key for AI to understand what you want when you ask it a question. LLMs are also imperative for our next skill…text-to-code. Now, why is that important to you? Many motion graphics software packages have an underlying code base you can use to script the actions you want. So, until text-to-video becomes more mature, you can use AI to generate code and scripts to tell the motion graphics tool what you want to do. Check out KlutzGPT. All of the aforementioned model types are just a sampling of the broad categories of specific AI models. In fact, HuggingFace, a repository for various AI models and datasets, has nearly 40 categories of models for you to download, train, and experiment with to assist you on your next project.
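As a hint of how low the barrier is, here's a hedged sketch that runs one of those categories (object detection) locally with the transformers pipeline API; facebook/detr-resnet-50 is a commonly used public checkpoint, and the frame path is hypothetical.

```python
# Hedged sketch: run one Hugging Face model category (object detection) locally
# with the transformers pipeline API. facebook/detr-resnet-50 is a public
# checkpoint commonly used for this task; the frame path is hypothetical.
# pip install transformers torch timm pillow
from transformers import pipeline

detector = pipeline("object-detection", model="facebook/detr-resnet-50")

for hit in detector("frame_0420.jpg"):
    # Each hit carries a label, a confidence score, and a bounding box: the raw
    # material for the automated tagging discussed earlier in this episode.
    print(f"{hit['score']:.2f}  {hit['label']:<12} {hit['box']}")
```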
2. Adapting to AI Evolution

"It is not the strongest species that survive, nor the most intelligent, but the ones most adaptable to change," once wrote Leon Megginson. And while he's not wrong, I much prefer my attention-span-appropriate "Adapt or Die", by Charles Darwin. AI and its many variants are exploding. AI has already become the fastest-adopted business technology in history. So, how can you adapt to this? Well, the first step is to not panic. DON'T PANIC. If you've used any type of AI, you can totally see the current deficits. Whether it's factual hallucinations or relatives with 20 fingers and 37 toes, AI will be evolving, which gives you time to learn and use AI tools as your sidekick, and not your replacement. As AI for the general public is still in its infancy, it's critical for us to learn about the AI models we're utilizing. This means knowing what data was used to train them, and who curated that data. This level of openness not only empowers us to make better decisions when selecting model providers but also fosters a culture of responsibility. We'll discuss this a bit more later. As with any moment in the zeitgeist, it's imperative that we understand the difference between what's the real deal and what's part of the hype machine. We're likely to see the "AI" label slapped on many tools. Now, you may not think this is a big deal, but it can be. To start off, it's a matter of investment and value. You want to ensure you're getting the whizzbang AI capabilities you've paid for, and not just a repackaged widget marketed as AI. Genuine AI tools can perform complex tasks, adapt to new situations, and sometimes even learn from those experiences, leading to more efficient processes and smoother outcomes. Non-AI tools or overhyped plugins, however, may not deliver their promised results, leading to major disappointment and potential setbacks in your project or business. Plus, understanding the capabilities of AI tools can lead to better usage. Knowing what an AI tool can and can't do allows you to utilize it to its fullest potential. On the other hand, blatantly mislabeling a non-AI tool as AI can lead to wasted time or underuse. Lastly, it's really about ethics and transparency. Misrepresenting a tool's capabilities is deceptive marketing, plain and simple. This is already a major problem – and a rarely policed one at that – in the tech world. It erodes trust between providers and users and can lead to skepticism about the entire field of AI. Now, speaking of ethics…

3. Ethics in AI Usage

As we leverage AI to push the boundaries of creativity, we're faced with new dilemmas around privacy, security, bias, and yes, our core ethics. These aren't just questions for the tech wizards or the philosophical ponderers. They're issues that each one of us, as contributors to the creative world, needs to grapple with. Why? Because the decisions we make today will shape the digital landscape of tomorrow. Start with your creative tools: capabilities like voice cloning and face swapping – which do have legitimate applications in our industry – also raise ethical issues such as privacy, consent, authenticity, and accountability. For example, voice cloning and face swapping could be used to create fake news, deepfakes, or malicious impersonations that could harm the reputation or even the safety of the original speakers or actors. Plus, bad actors could undermine the trust and credibility of the creative content and its sources. To solve this, we need explicit consent from the original person before their voice or face is used. No more unsolicited 'borrowing'. These technologies aren't a license to steal. Next, we need transparency. Any content using these technologies should have some form of credit or disclaimer, to ensure the audience knows what's up. Finally, accountability and traceability are key. Keeping track of source data and synthetic outputs ensures responsible use. This means finally deciding on and implementing some form of chain-of-custody solution, such as the Content Authenticity Initiative. In essence, we're talking about a culture of responsibility in AI usage, balancing the scales of creativity and ethics. From a macro perspective, data privacy and security have become increasingly critical ethical concerns within the AI sphere. As AI systems extensively rely on vast amounts of your personal data, the issues of data ownership and protection have exploded. To handle this, it's imperative to establish and enforce stringent data protection guidelines. The problem is, we've already been using the internet for a few decades now, and a good chunk of your data is already out there. That forum terms-of-service you just scrolled past and clicked I ACCEPT on has your data, which they can potentially monetize in any number of ways – including using it to train new AI models. Now, ethical online services could certainly provide users with a user-friendly opt-out option, particularly for AI services that are deemed excessively invasive. Similar to the CAN-SPAM Act in the US and the GDPR in the EU, this could allow users to opt out of any of their data being used to train AI models. There are several generative AI cases currently making their way through the U.S. legal system, including lawsuits against Stability AI, Midjourney, and DeviantArt, who are accused of mass copyright infringement. Microsoft, which owns GitHub, and OpenAI are also being sued over GitHub Copilot's tendency to reproduce developers' publicly posted, open-source licensed code. Keep tabs on these cases, as they will shape how the combined art you make with AI is recognized. Another critical point to consider is the potential for bias and discrimination in AI. The danger here is that biased data can lead to AI systems that further perpetuate those biases. The key to breaking this cycle is ensuring that the data used to train AI is diverse and unbiased and captures multiple perspectives. It's also essential to regularly monitor AI systems to identify and rectify any biases that may sneak in. Mandating AI model providers to publish their datasets to the public won't fly, which means some form of audit by a 3rd party. The immediate thought here is some form of regulatory body, which I really can't see a way around. Transparency in AI algorithms is also an integral part of ethical AI use.
It’s important for creators to understand how and why AI systems make certain decisions. As AI for the masses is a relatively new technology, we all need to become educated on the models we’re using. This kind of transparency can lead to more informed model provider choices and quite frankly encourages a sense of accountability. Update: OpenAI, Google, others pledge to watermark AI content for safety, White House says. 4. Societal Implications of AI The broader societal implications of AI in the creative industry extend well beyond your editing fortress of solitude. As AI continues to transform workflows and redefine job roles, it is crucial to support your fellow creatives in adapting to these changes. One way to support creatives is through continued education and upskilling programs. As AI tools become more prevalent, editors should embrace lifelong learning and acquire new skills to stay relevant in our industry. By developing a deeper level of understanding of AI technology and its applications, editors can leverage AI to their advantage and remain valuable contributors to the creative process. As Phil Libin , creator of Evernote and mmhmm , and current CEO of All Turtles said: “AI won’t replace any humans, humans using AI will.” Phil’s quote, but my presentation – LACPUG May 2023 Like it or not, Democratic governance of AI is also crucial to address potential risks and ensure responsible AI development and usage. Transparent regulations and ethical guidelines can help shape the future of AI in a way that aligns with our societal values and prevents misuse. I know, that last statement is loaded with several gotchas, but I really don’t see another option. Washington has historically played catch up on all things technology, but the current administration has published the “AI Bill of Rights” which speaks to many of these topics. However as of now, they are not enforceable by law, and there aren’t any federal laws that explicitly limit the use of artificial intelligence, or protect us from its harm. 5. AI Evolution & Impact At its core, what we do in post-production is all about storytelling, and AI, as fascinating as it is, is merely another tool in our toolbox. It’s true, AI can analyze data, identify patterns, and even suggest edits. Heck, it can generate content based on predefined parameters. But, no matter how advanced it becomes, it lacks the innate understanding of the human condition and the emotional and cultural context that you, as artists, possess. To be clear: Don’t believe the job loss hype. AI is not about to take over our jobs. Please don’t fall for the classic “Lump of Labour” misconception that automation kills jobs. It’s simply not true. Technology serves as the spark for productivity enhancement, making people more efficient in their work. This increased efficiency triggers a chain reaction: it drives down the costs of goods and services and pushes up your wages. The net result is a surge in economic growth and job opportunities. But it doesn’t just stop there. It also inspires the emergence of fresh jobs and industries; ones that we couldn’t have imagined just a few short years ago. In the face of evolving AI technology, the role of editors is not diminishing…but transforming. You are the bridge between the cold calculations of AI and the warmth of human connection. We infuse videos with a depth of storytelling that resonates with audiences, touches hearts, and sparks imaginations. AI cannot replace your creative intuition or your storytelling skills. 
It’s the human touch that adds the emotional depth, the nuanced transitions, and your profound connection with viewers. Remember that, my fellow creatives, and let’s shape the future of our industry together. I’m sure you have some input on one or more of these 5 THINGS . Let me know in the comments section. Also, please subscribe and share this tech goodness with the rest of your techie friends. Even if they are AI. 5 THINGS is also available as a video or audio-only podcast, so search for it on your podcast platform du jour. Look for the red logo! Until the next episode: learn more, do more. Like early, share often, and don’t forget to subscribe. Editing & Motion Graphics: Amy at AwkwardAnthems.…
Today we’re going to take a deep dive into the various ways to edit remotely with Adobe Premiere Pro , when you or your team, plus your computers and media just can’t be in the same place at the same time. Let’s get started. 1. Team Projects Team Projects in Premiere Pro – as well as for After Effects – has been around for quite a while. And it’s free! Team Projects is Adobe’s solution for creatives who each have their own local edit system– either at the office or at home – and a local copy of the media attached to that edit machine. Meaning, no one in your team is using any shared storage. Everyone accesses a Project that Adobe hosts in the cloud. And because Creative Cloud is managing the Team Projects database, versions and changes are tracked. Overview of Adobe Team Projects Of course, this workflow does require discipline, including organizing media carefully and utilizing standardized naming conventions. But once you’re in the groove, Team Projects is very easy to use. Let’s take a quick look, so you can see how the flow goes. You can start the process when you have Premiere Pro Open. Give the project a name, and then add Team members as collaborators with their email addresses. They’ll get a desktop notification through Creative Cloud that they’ve been added to the Team project. Be sure to check your scratch disks on the 3 rd tab correctly – as every team project editor will be saving their files to their own local storage. Let’s fast-forward till we have an edit we want to share. In your Team Project pane, you’ll see a cute little arrow at the bottom right of the pane, that tells you to share your changes in the team project. Don’t worry if you forget, if you look sequence name tab, and see an arrow, that’s a reminder to share and upload your changes. Click the “Share My Changes” button, and you’ll see all the stuff you’ll be sharing with your team members. Add a comment if you wanna summarize what you did. Click “Share”. Premiere Pro will then upload your changes to your Team’s Creative Cloud account. Your Team members will then open the Team Projects they’ve been invited to through the Creative Cloud Desktop app. Don’t worry – if any media files are marked offline in Premiere, team members can either relink to that media if they have it locally, or you can download them from within Premiere Pro via the Media Management option in Premiere. As you can see, this is where the aforementioned Scratch disks, media management, and organization really come into play – or else you’ll be relinking all day. Now, this is just the project, sequences, and media. What about Review and Approve with your Team? With the Adobe acquisition of Frame.io last year , Team Projects now adds review and approval capability inside Teams Projects. Despite the fact Team Projects has been around for a while, it’s still an excellent solution that is already part of your Adobe Creative Cloud subscription, so there is no extra cost to test it out. 2. Adobe Productions Adobe Productions is the evolution of the “Shared Projects” feature Adobe rolled out in 2017 and it differs significantly from Adobe’s Team Projects. While the workflow with Team Projects expects each user to have their own local workstation and local copies of the media, Productions operates assuming that everyone on your team is already connected to the same shared storage. This would be akin to having all your creatives in the same building, and all of them mounting the same NAS or SAN volumes to their machine. 
Because everyone can access the same media at the same time, there is no need to copy media back and forth from your local drive to the shared drive. It saves a ton of time. Overview of Adobe Productions The trick to make this work is to have a software "traffic cop" working in the background and watching over every user opening what Adobe calls a "Production". A Production is essentially a souped-up Premiere Pro project file that points to several smaller project files. The smaller project files are what each editor works with, and all changes are saved to the smaller projects within the primary Production. The software traffic cop in Premiere allows users to work on a Project within the Production, locking or unlocking the Project as needed to prevent overwrites or corruption of the files. We start by creating a new Production within Premiere Pro. We set a folder, and any Projects placed within this Production folder will be part of the Production. This is stored on the single shared storage volume everyone is using, alongside the same media repository everyone is reading from. Now, when you load up the project within the Production, you might notice the layout looks a bit odd. Let me demystify it for you. The various icons and their function inside an Adobe Production We're used to "Bins" in Premiere Pro Projects. In Productions, we call them "Folders". Also, in Premiere Pro, we're used to seeing "sequences." In Productions, we call them "Project Files". I know that takes some getting used to. What will help is that the Production pane in Premiere looks a lot like your folder structure at the desktop level. So, unlike Team Projects, where the media organization on disk can be different than the media organization within your project, Productions more closely resembles the "what you see is what you get" method when you organize at the OS level. Adobe Productions on disk and inside Premiere Pro We only have time for a few features here, but those who have worked in other collaborative editorial software will recognize some of the features of Productions – such as the green and red icons, which tell you at a glance whether a project is in use or free for you to edit. If this organization sounds familiar, it is. While Productions was a huge new feature for Adobe, this workflow was popularized earlier by Avid, whose Media Composer software connected to their proprietary and 3rd party shared storage. You'll also notice that an Adobe Production is much like an Avid Project file, in that an Adobe Production links to Adobe Project files in very much the same way an Avid Project links to Avid Bins. 3. All in the Cloud The future, my friends, is here, and Adobe has embraced it for many years now. When deployed properly, all your Windows workstations running Premiere Pro in the cloud connect to a pool of shared storage, which is also in the cloud. As every editor has access to the same shared storage at the same time, the experience mimics that of the on-premises Productions workflow we talked about a few short minutes ago. Overview of using Premiere Pro in the cloud Now, you have the option of selecting and configuring your cloud workstations and appropriate cloud shared storage, plus 3rd party collaboration tools for your cloud editing fortress, or you can use one of several companies that have already built that fortress so all you have to do is move your stuff in.
If you’re rolling your own cloud deployment, we have Adobe’s guide on Creative Cloud deployment on Virtual Desktop Infrastructures . These guides spell out requirements and best practices for deploying on AWS , Azure , and GCP platforms. Close attention should be paid to Adobe’s notes talking about what types of machines to use, what types or tiers of cloud storage to use, what screen sharing protocols to use, and what workflows simply aren’t ready for prime time, like high fidelity color grading or surround sound audio. These guides provided by Adobe aren’t meant as a cookbook, but rather as bullet points for you cloud folks to get busy. For all-in-one solutions, I can’t recommend BeBop Technology highly enough. They are by far the most advanced and mature platform for using Adobe in the cloud. With fast shared storage, enterprise security, plus on-demand and live review and approve tools. All you do is bring your software licenses, and BeBop does the rest. Plus, you pay the actual cloud costs, not marked-up costs by a 3 rd party. As someone who chose to work for BeBop for nearly 4 years because of just how far ahead they were than anyone else, this is by far the best all-in-one solution. Other popular solutions include AWS’s Nimble Studio , albeit with some caveats on availability and features around the world, and that you’re pretty much on your own to administer it. AWS also has its Workspaces solution, but the Workspaces feature set really isn’t robust enough for post-production. A slew of other companies has popped up, who simply tie into the APIs of various clouds and then present you a web front end so you can easily spin up and spin down machines. These are normally for VFX rendering or single-user usage, rather than real-time collaboration with shared storage and teams of users…all in the cloud. You will need to accept the truth that doing anything in the cloud will have fluctuations in cost from month to month. Working entirely in the cloud is not an “all-you-can-eat” for one price. The cloud is pay-as-you-go, and every workstation hour, server hour, and every MB stored or downloaded costs money. Expect several hundred dollars per editor in total cloud costs per month – even before your Adobe software costs. It is, however, cheaper than renting a local edit and storage rig. 4. Remoting into the Office I’m sure many of you have gotten into the groove of remoting into a system. That is, working from home in front of a terminal or workstation, and connecting to an editing machine back at the office. This is often the first thing facilities try when going remote because your team can use the same editing computers and storage they’ve always used. And it can work well if your home network and the office infrastructure are robust enough. Overview of remoting into the office This normally involves the facility creating a VPN – or Virtual Private Network – on the company’s firewall so editors outside the four walls of the facility can reach their machines behind the firewall. It also means the firewall will need to have specific ports opened, so screen-sharing protocols can do their screen-sharing thing. One of the top screen-sharing solutions that Adobe, Avid, and others support is Teradici’s PCoIP or PCoIP Ultra protocol , now owned by HP . Great quality, killer security, and is dead simple for the end user. Parsec , now owned by Unity , is another flexible solution with a ton of user controls. 
I suggest staying away from screen-sharing protocols that are meant for IT-type usage, like TeamViewer, Apple or Microsoft Remote Desktop, and VNC, just to name a few. True, they are inexpensive, but they are not optimized for sync audio and video, color fidelity, or full frame rates. Now, we have to briefly talk about the laws of physics. Thanks to those laws, you can't be on the other side of the planet from your remote editing machine and expect to have a pleasurable editing experience. With few exceptions, ya gotta be on the same continent, and within about 1500 miles of the workstation, to keep your sanity. For you ping nerds out there, ideally you wanna be under 60ms or 70ms roundtrip. And of course, the lower the ping number, the more likely you are to have a pleasurable editing experience. 5. Hybrids As is the case with most so-called "standard" workflows, there are always hybrids. Hybrid methodologies address unique remote and technology scenarios; they may also address other gaps in the facility at the same time, or simply make workflows even more flexible. And while I can't cover every single permutation, here are a few to be on the lookout for. My first hybrid tool is LucidLink. LucidLink is cloud storage that understands the difference between the metadata of your media and the media itself, coupled with smart caching. Using that approach, LucidLink can grab the media Premiere Pro needs, but only *the part* of the file that is actually being requested. This means you don't have to download the entire file in order to edit with it. Plus, that data caches on your system and plays from the cache when the NLE needs it again, instead of going back to the cloud. It's awesome. LucidLink can work with on-premises edit machines or with cloud VMs, making it a great add-on for increased flexibility. Next, we have Postlab, which is part of Hedge software. Postlab gives Premiere Pro editors the ability to sync project files, tasks, bookmarks, and media stored in Postlab Drive to editors anywhere in the world using Postlab's cloud servers. Postlab also locks Premiere Pro projects when in use by another team member, so nothing gets corrupted or lost. Moving up the food chain, asset management could be a solution for you. Everyone struggles with how to organize, catalog, search, and retrieve their media. Well, what if you had an asset management system that could handle those chores *and* stream proxies from your storage at the office or in the cloud to your local computer running Premiere Pro at the same time? It's much like Adobe's former enterprise solution Adobe Anywhere, which, as luck would have it, was the first ever 5 THINGS episode way back in 2014. IPV's Curator product has this feature as an add-on, as does Arvato Bertelsmann's EditMate product. Both are enterprise, and both are meant for much more than remote editing. Have more Adobe remote editing concerns beyond just these 5 questions? Ask me in the Comments section. Also, please share the tech goodness of this *entire* series with the rest of your techie friends. 5 THINGS is also available as a video or audio-only podcast, so search for it on your podcast platform du jour. Look for the red logo! Until the next episode: learn more, do more. Like early, share often, and don't forget to subscribe.
All of you are asking the same thing. "How can I edit remotely or work from home?" Today we'll look at Avid, as they have many supported options, so you can cut with Media Composer from just about anywhere. Let's get started. 1. Extending Your Desktop The first method we'll look at is simply extending your desktop – that is, having your processing computer at the office while you work from home and remote into that machine. This has been the crutch that most facilities have relied on in the past few weeks. Let's examine how this works. First, this scenario assumes that you edit at a facility, where all of the networked computers and shared storage are…and that you can't take any of those things home. This can be due to security, or other concerns like needing access to hundreds of TB of data. In this case, the creatives are sent home, and I.T. installs a remote desktop hardware or software solution on each of the machines. The creatives then connect through a VPN – or virtual private network – to gain secure access from their home editing fortresses of solitude back into the facility and attempt to work as normal. Now, on the surface this sounds like a real win-win, right? You get access to your usual machine and usual shared storage. Sure, you lose things like a confidence monitor (if you had one), but you should be fine, right? The devil, as always, is in the details. Typical screen sharing software solutions that are installed on your office editing machine are often dumpster fires for creatives. I'm not saying they are bad for general I.T. use, or when you need to remote in and re-export something, but by and large most screen sharing protocols do not give a great user experience. Full frame rate, A/V sync, color fidelity, and responsiveness usually suffer. Solutions like TeamViewer, Apple or Microsoft Remote Desktop, VNC, or most any of the other web-based solutions fail. Hard. You'll pull all of your hair out before you finish an edit. Moving up to more robust solutions like HP's RGS – Remote Graphics Software – or a top-of-the-line option like Teradici's PCoIP software is about as good as you're gonna get. The license may cost a few hundred dollars, too…depending on your configuration. But here's the kicker. They're Windows only as a host. While you can access the computer running the Teradici software with a macOS or Windows equipped computer – or even via a hardware zero client – the environment you create in will always be a Windows OS. Quite unfortunately, there is no post-production-creative-friendly screen sharing solution that runs on a macOS host. The only solution I've come across over these many years is a company called Amulet Hotkey – yes, that's their name – who take the Teradici PCoIP Tera2 card, put it into a custom external enclosure, and add some secret sauce. You then feed the output of your graphics card, plus your keyboard, mouse, and audio into the device, and the PCoIP technology takes over. It's quite frankly the best of both worlds: PCoIP and macOS. This ain't gonna be cheap. Expect a few thousand dollars per hardware device, and availability at the moment may be difficult. You're also going to need to do some network configuration for Quality of Service, and then decide how you're going to "receive" the screen share at home, either on a laptop/desktop or with a zero client. 2. Islands of Media There is no doubt that as a Media Composer user, you've already tried this.
It’s the easiest and least expensive way to have multiple users working on Avid projects at the same time while not in the same building. Wait a second. Let’s back-up and review how this works before we jump into the nitty-gritty. We start off as we did before, with everyone working at the facility on networked computers and shared storage. We then begin to replicate the media from the facility on to portable drives. Obviously, this could be a security risk and could potentially break the contract your facility might have with the content providers. But that’s a soapbox I’ll get on later in the episode. Replicating the media will require some serious media management. This means an accounting system to track who has what media – as well as a tight versioning schema. You’ll need to look at syncing schemas to get new media out to users, unless the editor is coming back to the facility for updated content. This may or may not also include someone at the facility creating lower resolution proxies so editors can go home with a few TBs of media instead of dozens of TBs of media. Often this may include watermarks on the media as yet another level of accountability in the event of a leak. Once that media is at home with the editor, there has to be a standardization on folder organization, naming conventions, and an extreme amount of attention paid to media management. Avid has always had a fantastic way of project management. Projects link to any number of bins, and those bins contain sequences, and both point to media. This means bins are small files that can be emailed or dropboxed or otherwise shared with other users quite easily. Provided each user has the appropriate media, Media Composer can be coaxed into relinking to the media when a new bin is loaded. This workflow does present some gotchas. Any rendered files will most like need to be re-rendered on each machine, and multiple users can’t work on the same bin or at the same time with the usual red lock/green unlock ability, so there does have to be some communication so as not to mess up someone else’s work. Avid Bin Locking – Red Lock & Green Unlock Obviously you’ll need to have a computer with a licensed copy Media Composer, plus the plugins you may need. Maybe you’ll have an awesome company like the reality post facility Bunim Murray who sent the editors home with their edit bay computers. Or, maybe you’ll have to grit your teeth with that old laptop collecting dust in the garage. Or, maybe – just maybe – you have the time and budget for the next few solutions. 3. Virtualization and Extending: Media Composer | Cloud VM This next solution is new-ish to the Avid family and has traditionally only been something you did within the 4 walls of your facility, and out to your edit bays – not to your home. Avid Media Composer Cloud | VM BTW, trivia tidbit – the vertical line you see in many Avid naming conventions? It’s called a Pipe. Make of that what you will. Media Composer Cloud | VM involves investing in a stack of servers at your facility and running VMs, or virtual machines, on these stacks of servers. On these VMs, a specially licensed copy of Media Composer Ultimate runs. Now, typically this is done so only everyone within the facility can access their computers, Media Composer, and shared storage from anywhere in the facility, and I.T. can administer everything from 1 location. No need to have computers in each of the bays. It’s sort of like the old tape room methodology. 
As everyone is in the same facility, latency is cut down; plus, tools like Teradici's PCoIP software give the user a very fluid creative experience. Recently, some facilities have been asking the question: "If our users are simply remoting into the VMs while they're here in the office, why can't they do the same at home?" As you can imagine, quality of service and the user experience are paramount in the Avid world, so this was usually discouraged. In fact, as of this video, Avid still only recommends this for users within a facility. But that hasn't stopped folks from trying it and using it. So, users go home, and on their laptops or desktops, they load up their Teradici PCoIP software client and connect via a VPN back to the facility mothership and continue working – much like we covered in option #1: Extending your desktop. There are a few caveats, however. This is not a "light 'em up tomorrow" solution. This requires specific servers, specific switches, and specific builds. Expect tens of thousands of dollars. It also requires upgraded licenses…and not just for Media Composer: many 3rd party software solutions and plugins either don't handle virtualization well or want to charge you for the privilege. And playing with 3rd party storage may not be pleasant. And since you use the Teradici PCoIP handshake to access the VMs, the virtualized environment is *only* Windows. Ahh, ahh – stop it. There's no crying in tech. Teradici PCoIP, while a fantastic protocol, does have some limitations when it comes to creatives. Higher-end color grading may be difficult, as PCoIP is limited to 8-bit viewing, although the newer Ultra variant does have 10-bit capability. Audio is usually limited to stereo playback, and don't expect a confidence monitor output – it's just the computer GUIs. But for most editing purposes, it'll work just fine. 4. Streaming Proxies: Media Composer | Cloud Remote This solution has actually been around for many years, but it was just called something different and was mainly found within news-type deployments. The premise is pretty elegant. If Avid MediaCentral UX – formerly branded as Interplay – was already your asset management du jour, managing and tracking your media at your facility, why couldn't it serve up that media to you on-demand wherever you were? And thus, Avid devised what is now known as Media Composer | Cloud Remote. Media Composer | Cloud Remote, when coupled with MediaCentral | Production Management, allows servers at your facility to serve up real-time proxies of your on-premises media out to your remote machines. The end-user has a full version of Media Composer and connects over a VPN back to the facility. The user checks out a project from the server, which is linked to media on Avid NEXIS shared storage back at the facility. When the local version of Media Composer attempts to access media, the local software retrieves that media from the server at the facility and streams a low res version in real-time to your local machine. You are literally playing media in your timeline that's being served up from hundreds or thousands of miles away – all within the familiar environment of Avid. Here's the best part: your local copy of Media Composer can run on either Windows *or* macOS. Media Composer | Cloud Remote also has the added bonus of letting remote users work with their local media and then upload that local media back to the facility in the background, checking it into the shared storage so others can access it.
It’s a pretty slick solution. However, Cloud Remote is simply a software option on top of Avid’s MediaCentral | Production Management solution. Meaning: remote editing with streamed proxies is not the main selling point of the system – it’s the asset management, automation, and collaborative features that drive companies to invest in it, with remote editing as an add-on. It’s like buying a house because it has a really cool garage. It’s also not cheap, nor easy to administer. You’re going to need an Avid ASCR to handle it, plus stacks of servers and storage. Expect over $100K to get going. Once configured however, it’s tech elegant and very cool. 5. Computer, Storage, and Media in the cloud: Avid | Edit On Demand Edit on Demand was a relatively quiet beta solution by Avid, and it’s only really been viable for the past year. It’s available in early access and “by-request-only”. In essence, Avid | Edit on Demand is accessing Media Composer software that is running in a public CSP – or cloud service provider. In this case, Microsoft Azure . The application, the NEXIS storage, and everything you need to edit with is in the cloud – and you access it via a laptop, desktop, or a zero client. It’s very similar to the method I talked about earlier: Virtualizing and Extending with Media Composer | Cloud VM. The difference here is that everything is in the cloud – not at your private facility. Let’s take a look. We’ll start where we’ve started every workflow: the traditional edit-at-a-facility. We then take that infrastructure and virtualize it within the Microsoft Azure cloud – and only on Windows desktop. Editors get to use Media Composer Ultimate and they get access to NEXIS storage, so bin locking and project sharing works as you would expect it to. It’s then business as usual. Like the other virtualized scenarios, this solution is also built on Teradici’s PCoIP, so the user experience is fantastic. Avid has also partnered with File Catalyst to enable users to upload and download content to and from the cloud NEXIS. This type of solution allows productions to scale up and down quickly as needed and plays right into OpEx business models…which the film and TV industry is famous for. At last check, Avid was able to get these workstations and environment ready in under a week. And because it’s in the cloud it allows users to work all around the world. The usual caveats apply: there is no video monitor output currently, and audio is still limited to stereo audio. And no, Pro Tools is not supported in any cloud. It’s also not easy to get other applications installed on the workstation or direct administrator access to the NEXIS storage. Pricing starts around $3000 per user per month, assuming under 200 hours of workstation a month. You can also buy hours and users in bundles, as well as TBs of storage. And while this price is a little steep for freelancers, it certainly is the fastest way to get to scale quickly and without a large CapEx investment. Bonus Editorial While available technology certainly frames what can and cannot physically be done, we’re still hampered by stagnant security restrictions and guidelines, and outdated work environment expectations. Many of the security requirements that are jammed into contracts are outdated and have not been revised, namely because no one wants to question it and potentially lose a large client. 
The looming worry of a hack – which is more often than not simply a password that was socially engineered – creates paranoia and the stacking of security protocols on top of more protocols. Mark my words, you're gonna see a serious revamp of security recommendations after this is over, and quite frankly, before this is over. I'm also hopeful that once these security guidelines are revisited and revamped, facilities will begin to realize that if you hired someone to represent your company, your brand, and generate the work that gets you paid, you should trust them to work remotely in some capacity. Have more Avid remote editing concerns other than just these 5 questions? Ask me in the Comments section. Also, please share the tech goodness of this *entire* series with the rest of your tech friends…they've got nothing better to do right now. Until the next episode: learn more, do more. Like early, share often, and don't forget to subscribe.
On this episode of 5 THINGS, we gonna get hiiiiiigh! In the clouds, with a primer on using the cloud for all things post-production. This is going to be a monster episode, so we better get started. 1. Why use the cloud for post-production? We in the Hollywood post industry are risk-averse. Yes, it's true my fam, look in the mirror, take a good hard look, and realize this truism. Take the hit. This is mainly because folks who make a living in post-production rely on predictable timetables and airtight outcomes. Deviating from this causes a potentially missed delivery or airdate, additional costs on an already tight budget, and quite frankly more stress. The cloud is still new-ish, and virtually all post tasks can be accomplished on-premises. So why on earth should we adopt something that we can't see, let alone touch? Incorporating the cloud into your workflow gives us a ton of advantages. For one, we're not limited to the 1 or 2 computers available to us locally. This gives us what I like to call parallel creation, where we can multitask across multiple computers simultaneously. Powerful computers. I'm talking exaFlops, zettaFlops, and someday, yottaFlops of processing power…more flopping power than that overclocked frankenputer in your closet. Yeah, I said it. Flopping power. It's also mostly affordable and getting cheaper quickly. To be clear, I'm not telling you post-production should be done only on-premises or only in the cloud; most workflows will always incorporate both. That being said, the cloud isn't for everyone. If you have more time than money, well then relying on your aging local machines may be the best economical choice. If your internet connection is more 1999 than 2019, then the time spent uploading and downloading media may be prohibitive. This is one reason I'm jazzed about 5G…but that's another episode. Now, let's look at some scenarios where the cloud may benefit your post-production process. 2. Transfer and Storage Alright, let's start small. I guarantee all of you have used some form of cloud transfer service and are storing at least something in the cloud. This can take the form of file sharing and sync applications like Dropbox, transfer sites like WeTransfer, enterprise solutions like Aspera, Signiant, or FileCatalyst, or even that antiquated, nearly 50-year-old protocol known as FTP. Short of sending your footage via snail mail or handcuffing it to someone while they hop on a plane, using the internet to store and transfer data is a common solution. The cloud offers numerous benefits. First is what we call the "five nines", or 99.999% availability. This means that the storage in the cloud is essentially always available, with a max downtime of about 5 ½ minutes a year (quick math on that in a moment). In the cloud, five nines are often considered the bare minimum. Companies like Backblaze claim eleven nines of durability for the data itself. This is considerably more robust than, say, that aging spinning disk you have sitting on your shelf. Hard Drive Life Expectancy In fact, almost a quarter of all spinning hard drives fail in their first 4 years. Ouch. I completely get the fact that the subscription or "rental" models are a highly divisive subject, and at the end of the day, that's what the cloud storage model is. But you can't deny that the cost that you get to spread out over years (also known as OpEx, or operating expenditure budget) is a bit more flexible and robust than the one-time buyout of storage (known as CapEx, or capital expenditure budget).
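As promised, here's the quick math on what those "nines" actually buy you, expressed as allowed downtime per year. Nothing vendor-specific here – it's straight arithmetic. One caveat: Backblaze's eleven-nines figure describes durability (the odds of losing data), not uptime, but the math works the same way.

```python
# Convert an availability (or durability) figure into allowed downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_minutes_per_year(availability: float) -> float:
    return (1.0 - availability) * MINUTES_PER_YEAR

examples = [
    ("three nines (99.9%)", 0.999),
    ("five nines (99.999%)", 0.99999),
    ("eleven nines (durability, not uptime)", 1 - 1e-11),
]
for label, value in examples:
    print(f"{label}: ~{downtime_minutes_per_year(value):g} minutes per year")
```

Five nines works out to roughly five and a quarter minutes a year – which is where that "5 ½ minutes" figure comes from – while three nines allows almost nine hours.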
Which brings us to the next point: what are the differences between the various cloud storage options? Well, that deserves its own 5 THINGS episode, but the 2 main things to know are that the pricing model covers "availability" – how quickly you can access the storage and read and write from it – and throughput – how fast you can upload and download to it. Slower storage is cheaper, and normal internet upload and download speeds are in line with what the storage can provide. Fast storage – that is, storage that gives you Gigabits per second and high IOPS for cloud editing – can be several hundred dollars a month per usable TB. This is why cloud storage is often used as a transfer medium, or as a backup or archive solution, rather than a real-time editing platform. However, with the move to more cloud-based applications, faster storage will become necessary. With private clouds and data lakes popping up all over, the cost of cloud storage will continue to drop, much like hard drive cost per TB has dropped over the past several years. Cloud storage also has the added benefit of allowing work outside of your office and collaborating in real-time without having to be within the 4 walls of your company. Often, high-end firewalls and security are, well, highly priced, and your company may not have that infrastructure…or the I.T. talent to take on such an endeavor. Rely on the cloud, and that security is built into your monthly price. Plus, most security breaches or hacks are due to human error or social engineering, not a fault in the security itself. Cloud storage also abstracts the physical location of your stored content from your business, making unauthorized access and physical attacks that much harder. 3. Rendering and Transcoding and VFX The next logical step in utilizing cloud resources is to offload the heavy lifting of your project that requires Flopping Power. The smart folks working in animation and VFX have been doing this for years. Rendering 100,000 frames (about an hour's worth of material, depending on your frame rate) across hundreds or thousands of processors is gonna be finished much faster than across the handful of processors you have locally. It's also a helluva lot cheaper to spin up machines as needed in the cloud than buying the horsepower outright for your suite. Before you begin, you need to determine what you're creating your models in and if cloud rendering is even an option. Typical creative environments that support cloud rendering workflows include tools like 3DS Max, Maya, and Houdini, among others. Next is identifying the CSP – cloud service provider – that supports a render farm in the cloud; in this case, one of the big 3: Microsoft Azure, Amazon Web Services, or Google Cloud. Once you have your CSP selected, a user establishes a secure connection to that CSP, usually via a VPN, or virtual private network. A VPN adds an encrypted layer of security between your machine and the CSP. It also provides a direct pipe to send and receive data between your local machines and your CSP. From here, queuing and render management software is needed. This is what schedules the renders across multiple machines, ensures each machine is getting the data it needs to crunch in the most efficient way possible, and recombines the rendered chunks back together. Deadline and Tractor are popular options.
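To picture what a render manager is doing at its most basic, here's a hypothetical sketch of carving a frame range into chunks and dealing them out to a pool of render nodes. The node names and chunk size are made up, and real products like Deadline and Tractor layer dependencies, retries, licensing, and asset transfer on top of this – this is only the core scheduling idea.

```python
# Split a frame range into chunks and assign them round-robin to render nodes.
# Purely illustrative - real render managers do far more than this.
from typing import Dict, List, Tuple

def chunk_frames(first: int, last: int, chunk_size: int) -> List[Tuple[int, int]]:
    return [(start, min(start + chunk_size - 1, last))
            for start in range(first, last + 1, chunk_size)]

def assign_chunks(chunks: List[Tuple[int, int]], nodes: List[str]) -> Dict[str, List[Tuple[int, int]]]:
    plan: Dict[str, List[Tuple[int, int]]] = {node: [] for node in nodes}
    for i, chunk in enumerate(chunks):
        plan[nodes[i % len(nodes)]].append(chunk)
    return plan

if __name__ == "__main__":
    # ~100,000 frames is roughly 69 minutes of material at 24fps.
    chunks = chunk_frames(1, 100_000, chunk_size=250)
    plan = assign_chunks(chunks, nodes=[f"render-{n:03d}" for n in range(1, 101)])  # 100 hypothetical nodes
    print(f"{len(chunks)} chunks across {len(plan)} nodes; render-001 gets {len(plan['render-001'])} of them")
```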
What render management software also does is orchestrate media movement between on-premises storage, the staging area before the render, and wherever the rendered media ends up. Next, the render farm machines run specialized software to render your chosen sequence. This can be V-Ray, Arnold, or RenderMan, among many others. Once these frames are rendered and added back to the collective sequence, the file is delivered. I know, this can get daunting, which is why productions traditionally have a VFX or Animation Pipeline Developer. They devise and optimize the workflows so costs are kept down and deadlines are hit. This hybrid methodology obviously blends creation and artistry on-premises with the heavy lifting done in the cloud. However, there is a more all-in-one solution, and that's doing *everything* in the cloud. The VFX artist works with a virtual machine in the cloud, which has all of the flopping power immediately available. The application and media are directly connected to the virtual machine. Companies like BeBop Technology have been doing this with apps like Blender, Maya, 3DS Max, After Effects, and more. DISCLAIMER: I work for BeBop because I love their tech. Transcoding, on the other hand, is a much more common way of using the horsepower of the cloud. As an example, ever seen the "processing" message on YouTube? Yeah, that's YouTube transcoding the files you've uploaded to various quality formats. Where this can benefit you is your deliverables. In today's VOD landscape, creating multiple formats for various outlets is commonplace. Each VOD provider has formats they prefer, and they're often not shy about rejecting your file. Don't take it personally; often their playout and delivery systems depend on the files they receive being in a particular, exact format. As an example, check out Netflix's requirements. The hitch here is metadata. Just using flopping power to flip the file doesn't deliver all of the ancillary data that more and more outlets want. This can be captioning, various languages or alt angles, descriptive text, color information, and more. Metadata resides in different locations within the file, whether it be an MP4, MOV, MXF, IMF, or any of the other container formats. Many outlets also ask for specialized sidecar XML files. I cannot overstate how important this metadata mapping is, and how often it is overlooked. You may wanna check out AWS Elastic Transcoder, which makes it pretty easy to not only flip files…but also do real-time transcoding if you're into that sorta thing. Telestream also has its Vantage software in the cloud, which adds things like quality control and speech-to-text functions. There are also specialty transcoding services, like PixelStrings by Cinnafilm, for those tricky frame rate conversions, high quality retiming, and creating newer formats like IMF packaging. 4. Video Editing, Audio Editing, Finishing, and Grading Audio and video editing, let alone audio mixing and video grading and finishing, are the holy grail for cloud computing in post-production, namely because these processes require human interaction at every step. Adding an edit, a keyframe, or a fader touch all requires constant, repeatable communication between the user and the creative tool. Cloud computing, if not done properly, can add unacceptable latency, as the user needs to wait for the keypress locally to be reflected remotely. This can be infuriating for creatives. A tenth of a second can mean the difference between creativity and…carnage.
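Since latency keeps coming up – here, and in the remote editing episodes – here's a rough back-of-the-envelope sketch of why distance matters so much. The assumptions are mine, not measurements: light in fiber travels at roughly two-thirds the speed of light, and real-world routing and queuing roughly doubles the theoretical minimum. Your actual numbers depend entirely on the route and the provider.

```python
# Very rough round-trip-time estimate from distance. Assumptions, not measurements:
#   - signal speed in fiber ~ 2/3 of the speed of light
#   - real-world routing/queuing overhead ~ 2x the theoretical minimum
SPEED_OF_LIGHT_KM_S = 299_792
FIBER_FACTOR = 2 / 3
OVERHEAD_FACTOR = 2.0

def estimated_rtt_ms(distance_km: float) -> float:
    one_way_seconds = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return one_way_seconds * 2 * 1000 * OVERHEAD_FACTOR

for miles in (100, 1500, 5000):
    km = miles * 1.609
    print(f"{miles:>5} miles: ~{estimated_rtt_ms(km):.0f} ms round trip (estimate)")
```

At around 1,500 miles you're already brushing up against the 60–70 ms round-trip comfort zone mentioned in the remote editing episodes, and intercontinental distances blow right past it.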
There are a few ways to tackle editing when not all of the hardware, software, or media is local to you…and sometimes you can combine multiple approaches into a hybrid. First, we have the private cloud, which can be your own little data center, serving up the media as live proxy streams to a remote creative with a typical editing machine. True remote editing. Next, we have the all-in approach – everything, and I mean absolutely everything, is virtualized in the cloud: the software application, the storage – and you access it all through a basic computer or what we call a zero client. Lastly, we have a hybrid approach. Serve the media up from the cloud to a watered-down, web-page-based editor on your local machine. Each has its pros and cons. Both Avid and Adobe have had versions of an on-premises server serving up proxies to remote editing systems for many years. The on-prem server – a private cloud, for all intents and purposes – serves out proxy streams of media for use natively within an Avid Media Composer or Adobe Premiere Pro system connected remotely. Adobe called it Adobe Anywhere, and today the application is…nowhere. The expensive product was shelved after a few years. Avid, however, is still doing this today, using a mix of many Avid solutions, including the product formerly known as Interplay, now called MediaCentral | Cloud UX, a few add-on modules, along with a Media Composer | Cloud Remote license. It's expensive, usually over $100K. Back to Adobe: I'd be remiss if I didn't mention 3rd party asset management systems that carry on the Adobe Anywhere approach. Solutions like VPMS EditMate from Arvato Bertelsmann, or Curator from IPV, are options, but they are based around their enterprise asset management systems, so don't expect the price tag to be anything but enterprise. With the all-in cloud approach, your NLE and all of the supporting software tools and storage are running in a VM – a virtual machine – in a nearby data center. This brings you the best of both worlds. Your local machine is simply a window into the cloud-hosted VM, which brings you all the benefits of the cloud, presented in a familiar way – a computer desktop. And you don't have the expensive internal infrastructure to manage. This is tricky though, as creatives need low latency, and geographical distance can be challenging if not done right. A few companies are accomplishing this, however, using robust screenshare protocols and nearby data centers. Avid has Media Composer and NEXIS running on Azure, which will be available with Avid's new "Edit on Demand" product. BeBop Technology is accomplishing the same thing, but with dozens of editorial and VFX apps, including Avid and Premiere. Disclaimer: I still work for BeBop. Because their technology is the sh&t. Some companies have investigated a novel approach: why not let creatives work in a web browser, to ensure cross-platform availability and to work without the proprietary nature that all major NLEs inherently have? This is a gutsy approach, as most creatives prefer to work within the tools they've become skilled in. However, for less intensive creative tasks – like string-outs or pulling selects, performed by users who may not be full-time power editors – it's an option. Avid adds some of this functionality into their newer Editorial Management product. Another popular choice for web browser editing is Blackbird, formerly known as FORScene by Forbidden Technology. This paradigm is probably the weakest for you pro editors out there.
I don’t know about you, but I want to work on the tools I’ve spent years getting better at. Alas, Mac only based apps like Final Cut Pro X are strictly local/on-premises solutions. And while there are Mac-centric data centers, often the Apple hardware ecosystem limits configuration options compared to PC counterparts. Most Mac data centers also do not have the infrastructure to provide robust screen sharing protocols to make remote-based Apple editing worthwhile. Blackmagic’s Resolve, while having remote workflows , still requires media to be located on both the local and remote systems. This effectively eliminates any performance benefits found in the cloud. 1 second in both audio and video Audio, my first love, has some way to go. While basic audio in an NLE can be accomplished with the methods I just outlined, emulating pro post audio tools can be challenging. Audio is usually measured in samples. Audio sampled at 48kHz is actually 48000 individual samples a second. Compare this to 24 to 60 frames a second for video, and you can see why precision is needed when working with audio. This is one reason the big DAW companies don’t yet sanction running their apps in the cloud. Creative work with latency by remote machines at the sample level makes this a clunky and ultimately unrewarding workflow. Pro Tool Cloud is a sorta hybrid, allowing near real-time collaboration of audio tracks and projects. However, audio processing and editing is still performed locally. On to Finishing and Color Grading in the cloud. Often these tasks take a ton of horsepower. And you’d think the cloud would be great for that! And it will be…someday. These processes usually require the high res or the source media – not proxies. This means the high res media has to be viewed by the finishing or color grade artist. These leaves us with 1 or 2 of unacceptable conditions: Cloud storage that can also play the high res content is prohibitively expensive and There isn’t a way to transmit high res media streams in real-time to be viewed and thus graded without unacceptable visual compression. But NDI you cry ! Yes, my tech lover, we’ll cover that in another episode. While remote grading with cloud media is not quite there, remote viewing is a bit more manageable. And we’ll cover that…now. 5. Review and Approve (and bonus!) Review and approve is one of the greatest achievements of the internet era for post-production. Leveraging the internet and data centers to house your latest project for feedback is now commonplace. This can be something as simple as pushing to YouTube or Vimeo or shooting someone a Dropbox link. While this has made collaboration without geographic borders possible, most solutions rely on asynchronous review and approve …that is, you push a file somewhere, someone watches it, then gives feedback. Real-time collaboration, or synchronous review and approve – meaning the creative stakeholders are all watching the same thing and at the same time, is a bit harder to do. As I mentioned earlier, real-time, high-fidelity video streaming can cause artifacts…out of sync audio, reduced frame rates, and all of this can take the user out of the moment. This is where more expensive solutions that are more in line with video conferencing surface, popular examples include Sohonet’s Clearview Flex , Streambox , or the newer Evercast solution . However, In this case, these tools are mostly using the cloud as a point to point transport mechanism, rather than leveraging the horsepower in the cloud. 
NDI holds a great deal of promise. As I already said, we'll cover that in another episode. Back to the non-real-time, asynchronous review and approve: the compromises of working in an asynchronous fashion are slowly being eroded away by the bells and whistles on top of the basic premise of sharing a file with someone not local to you. Frame.io is dominating in this space, with plug-ins and extensions for access right within your NLE, a desktop app for fast media transfers, plus their web-page review and approval process, which is by far the best out there. Wiredrive and Kollaborate are other options, also offering web-page review and approve. I'm also a big fan of having your asset management system tied into an asynchronous review and approval process. This allows permitted folks to see even more content and have any changes or notes tracked within one application. Many enterprise DAMs have this functionality. A favorite of mine is CatDV, which has these tools built in, as well as Akomi by North Shore Automation, which has an even slicker implementation and the ability to run in the cloud. As a bonus cloud tool, I'm also a big fan of Endcrawl, an online service that generates credit crawls for your projects without the traditional visual jitteriness from your NLE – and without the inevitable problems of 37 credit revisions. A heartfelt thank you to everyone who reached out via text or email or shared my last personal video. It means more than you know. Until the next episode: learn more, do more. Like early, share often, and don't forget to subscribe. Thanks for watching.
On this episode of 5 THINGS, we're checking out the 2018 Mac Mini from Apple, and the new eGPU Pro offering from Blackmagic and…well, Apple. We're also running benchmarks against your favorite (or least favorite) NLEs: Final Cut Pro X, DaVinci Resolve, Adobe Premiere Pro, and Avid Media Composer. Let's get to it. NOTE: The eGPU Pro in this episode was a pre-release, beta model. Your results may vary with the shipping version. 1. 2018 Apple Mac Mini What better machine to test how well an external GPU works than on a machine that has a horrible built-in GPU? Yes, tech friends, the powerful top of the line 2018 Mac Mini is built on Intel technology, which means there is a small GPU on the chip. While this GPU, the Intel UHD Graphics 630, isn't going to break any performance records, it does provide a user with a basic way to feed their screen without a 3rd party graphics card. The Mac Mini also rocks USB-C for Thunderbolt 3 access. It's this 40Gb/s connection that provides the bandwidth for an eGPU to shine. Slower I/O, like Thunderbolt 2 or even older connections, simply doesn't provide enough bandwidth to accommodate all that a modern GPU can provide. The new Mini provides four USB-C Thunderbolt 3 ports, enough for up to 2 eGPUs. I also dig the legacy USB 3 ports…as we all have peripherals like keyboards and mice that still rock USB Type-A. Back to throughput: the 2018 Mac Mini also has an option for a 10GigE copper connection. I know, many see 10GigE as a sign that it's "for professionals". Quite frankly, 10GigE is the new 1GigE, so if you haven't been looking into it, now would be a good time. The Mini has many different options, from an entry level i3 processor to the midrange i5, to the i7 model that I tested with. The units can be configured with 4 or 6 cores, and speeds of up to 4.6GHz in Turbo Boost mode. For RAM, the 2018 Mini supports up to 64GB, although you may need to take it to an Apple store to get it installed. You can also build your system out with up to a 2TB PCIe SSD. My testing unit was the 6 core i7, with 32GB of RAM. I ran some benchmarks on the Mini and compared the results with Geekbench scores of other popular Apple machines. Here, we have a top of the line 2013 Mac Pro, a maxed out 2017 MacBook Pro, plus my Hackintosh build from a few episodes back. You can see the 2018 Mac Mini is apparently no slouch when it comes to performance. Plus, it helps that the last Mac Pro is over 5 years old, too. Let that sink in for a minute. During testing, I was alerted to the fact that the Mini will throttle the chip speed when the unit hits 100 degrees Celsius, so if you push the system too hard, the performance will suffer. That kinda blows. There are a ton of in-depth 2018 Mac Mini reviews out there, and let's face it, you're really here for the eGPU Pro and video post-production stuff…so let's move onto that…now. 2. Blackmagic eGPU Pro Like the non-Pro model before it, the eGPU Pro from Blackmagic simplifies the addition of enhanced graphics processing by putting a GPU in an external enclosure. Also like the previous model, the Pro has an 8GB card in it, albeit a faster one than the Radeon Pro 580 in the previous gen: the new GPU is the Radeon RX Vega 56. Both models are meant to capitalize on the Metal playback engine, although apps that still lean on OpenCL can use them as well (CUDA is NVIDIA-only, so it's not in play on these AMD cards). Given where both OpenCL and CUDA are headed on the Mac platform…best start getting used to Metal.
The eGPU Pro also features a DisplayPort connection, unlike the non-Pro version. This allows for up to 5K monitoring if you so desire…as the HDMI 2.0 port tops out at 4K DCI at 60fps. Both units come with 4 USB 3.0 ports and a spare USB-C Thunderbolt 3 port so you can connect even more peripherals through the eGPU. Now, this sucker is damn quiet. Aside from a small, white LED indicator at the bottom of the unit, and an icon in the menu bar, you wouldn't even know the unit is running. The Blackmagic eGPU Pro is very quiet. The only physical indicator is a small white LED at the bottom. The unit is mostly a massive heatsink, so expect heat to come pouring out of the top. The eGPU Pro ships with a 1 foot USB-C cable. This cable is VERY short, which means this unit is always going to be next to your computer. While I didn't test it, I understand longer USB-C cables can cause issues, so stick with the cable that came with it. It's imposing. So is the size of the Blackmagic eGPU Pro. While the eGPU Pro is cool looking, it's pretty hefty and does take up valuable desktop real estate. It also dwarfs gear next to it, like the Mac Mini. In fact, it's larger than a 2013 Mac Pro, among other…things. What is uniquely different, however, is that the eGPU Pro is only certified to work with macOS 10.14 Mojave. I like this, as eGPU support in macOS 10.13 High Sierra was a crapshoot at best. Interestingly enough, nothing needs to be installed in Mojave for the eGPU Pro to run – the GPU drivers are built into the OS. In fact, the eGPU Pro comes with no drivers or software to install. I tested with a pre-shipping model of the Pro – in fact, the packaging was still for the non-Pro version. As such, there was no documentation or quick start guide with the unit. The ship date for the eGPU Pro has now slipped a month from November 2018 to December 2018; perhaps code is still being written for Resolve and FCPX, as I did find some issues. Let's check that out now. 3. Apple Final Cut Pro X and Blackmagic DaVinci Resolve Please check out the video for specifics! Powergrades by Jason Bowdach to bring the Resolve system to its knees. 4. Adobe Premiere Pro and Avid Media Composer Please check out the video for specifics! Avid Media Composer disables all GPU effects, as the Mac Mini's onboard GPU fails to meet Media Composer's requirements. 5. Conclusions Lots of data to crunch! It's obvious that despite the initial high benchmarks via Geekbench, the 2018 Mac Mini, on its own, doesn't really have a place for creative use in post, unless it's used as a file server, or perhaps as a lone Mac in a sea of Windows machines when you need an easy way to create ProRes files. I concede that it may work for story producers or preditors, but I find that it's never a good practice to hit the upper limits of a new machine from day 1. I should also note that the 2018 Mac Mini is not qualified by all NLE manufacturers. Avid hasn't qualified it as of now. It doesn't meet the minimum specs that Adobe publishes. The Blackmagic DaVinci Resolve Configuration Guide calls out specifically that the Mini should not be used. Apple's requirements for Final Cut Pro X, on the other hand, are barely met by the Mini. Personally, I'd save the $2000 this machine costs and put that towards an iMac or iMac Pro, or perhaps the highly anticipated 2019 Mac Pro. Now, as for the eGPU, it's important to remember that all of this is new. This was essentially a science experiment. Mojave, as of this episode, is only at 10.14.1.
Apple, one of the partners in this eGPU collaboration, still doesn’t have all of the bugs worked out, as Compressor doesn’t appear to utilize it. That being said, speed increases in FCPX and Resolve are undeniable and certainly showcase the speed a good GPU can bring to the post table. Adobe, while not part of the Apple/Blackmagic eGPU soiree, also shows speed benefits out of the box – and that’s before Adobe has done any optimization for it. That’s pretty impressive. Avid, on the other hand, has never been a GPU powerhouse. My tests only really showed that Avid has some serious engineering to do to incorporate eGPUs into future releases. If this eGPU Pro solution could breathe life into an old system, I might be more inclined to suggest it…however, the fact that you need a relatively recent Thunderbolt 3 enabled machine makes that “old system” label inappropriate. Now, you can ‘roll your own’ eGPU, by using a 3rd party external chassis and compatible graphics cards. They can be parted out and purchased for under $800 on the street…$400 cheaper than the eGPU Pro. While I don’t expect an all in one solution to be the same cost as a DIY, a $400 delta is too big of a chunk to ignore. The only real time I can suggest a solution like this is for those folks who edit on a MacBook or a MacBook Pro. Those road warriors who are on the go and editing but need to come back home base at some point and do some serious heavy lifting. But is that market large enough? Gaming and other GPU enabled applications have a much wider reach than the Post community. Regardless, I’m excited to see if in 12-18 months NLEs have been able to start utilizing eGPUs better than they do from day 1. Have more Pro eGPU and Mac Mini questions other than just these 5? Ask me in the Comments section. Until the next episode: learn more, do more. Like early, share often, and don’t forget to subscribe. Thanks for watching. Evaluation gear provided by Michael Horton / LACPUG Resolve power grades by Jason Bowdach RED Footage courtesy RED.com XAVC Footage courtesy Alex Pasquini / alexpasquini.com Cut Trance – Cephelopod by Kevin MacLeod is licensed under a Creative Commons Attribution license ( https://creativecommons.org/licenses/by/4.0/ ) Source: http://incompetech.com/music/royalty-free/index.html?isrc=USUAN1100273 Artist: http://incompetech.com/…
On this episode of 5 THINGS , I’ve got a few tricks that you may not know about that will help you upload, manage, and make YouTube do your bidding. 1. Upload Tricks You probably have media on YouTube, and you probably think that after thousands of hours, you’ve mastered the ‘Tube. But there are some little-known upload tricks and workarounds, playback shortcuts, and voodoo that you may not even know that YouTube does. Let’s start at the beginning. Let’s say you’ve got a masterpiece of a video, maybe it’s an uncle getting softballed in the crotch, maybe it’s your buddy taking a header down a flight of stairs, or maybe, just maybe it’s the cutest pet in the world. MINE. Lucy: the cutest, bestest pet in the world. …and the world needs to see it, right? And you export a totally-worth-the-hard-drive-space-huge-monster-master-file so you don’t lose 1 bit of quality. And that’s OK. Now, in actuality, it’s not the most efficient, but let’s save that existential technology discussion for another episode. Did you know that YouTube will actually take this massive file? That you don’t need to shrink it, recompress it, or otherwise use the presets in your transcoder du jour? YouTube used to publish a page that specified video file sizes and data rates for “enterprise” grade connections. Ostensibly, this was so companies with fat internet connections could upload massive files. After all, YouTube re-compresses all files anyway. Yes, as I’ve said many times before, YouTube will ALWAYS recompress your media. ALWAYS. “Enterprise” video bitrates for YouTube. These are 5-6x LARGER than what YouTube recommends for typical users . But, this page was taken down. Why? Because accepting larger files sizes ties up YouTube’s servers, and takes longer for their transcoding computers to chomp through and subsequently create the versions you’ll end up watching. Plus, it’s a crappy experience for you, the end user, to wait hours for an upload AND the processing. Despite this, you can still do it. In the above video, you can see I have an HD ProRes file, and it’s several GB. As I select the file and start the upload, YouTube tells me this will take several hours. That sucks. However, uploading a less compressed file means the versions that YouTube creates will be based on higher quality than the compressed version you’d normally export and upload from your video editor or one that you’d create from your media encoder. YouTube creates all of your media variants…so you don’t have to. So what would you rather have…a copy of a copy of your finished piece? Or just 1 copy? The less compression you use, the better looking the final video will be . More on this here: A YouTube user uploads, downloads and re-uploads the same video to YouTube 1000 times and the compression is staggering. This is called “generational loss” . By the way, you know that YouTube creates all of your various versions, right? From the file you upload, YouTube creates the 240, 360, 480, 720, 1080 and higher versions. Are yours not showing up? Have patience. YouTube takes a little bit. OK, back to the high res fat files. I know, the fact there is a long upload for these large files sucks, but it does lead me to the next tip. Did you know you can resume a broken upload ? As long as you have the same file name, you can resume an upload for 24 hours after a failed upload. In the video above, let me show you. Here’s the file I was uploading before. As you can see, it still has quite a way to go. Oops, closed my tab . Now, I’m gonna wait a little bit. 
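While we wait, here’s roughly what’s going on behind the scenes: YouTube’s upload endpoints accept resumable, chunked uploads, which is the same basic idea the browser’s resume trick relies on. If you’d rather script your uploads, a minimal sketch with the YouTube Data API v3 Python client looks something like this. Treat it as a sketch, not gospel: it assumes you’ve already created an authorized OAuth credentials object called creds, and the file name and metadata are placeholders.

```python
# A minimal sketch of a resumable upload with the YouTube Data API v3.
# Assumes `creds` is an already-authorized OAuth2 credentials object
# (see Google's Python client docs); file name and metadata are placeholders.
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

youtube = build("youtube", "v3", credentials=creds)

media = MediaFileUpload(
    "5things_master.mov",
    chunksize=8 * 1024 * 1024,   # upload in 8 MB chunks
    resumable=True,              # lets a dropped connection pick up where it left off
)

request = youtube.videos().insert(
    part="snippet,status",
    body={
        "snippet": {"title": "My masterpiece", "description": "Huge master file upload"},
        "status": {"privacyStatus": "private"},
    },
    media_body=media,
)

response = None
while response is None:
    status, response = request.next_chunk()   # each call sends one chunk
    if status:
        print(f"Uploaded {int(status.progress() * 100)}%")

print("Done. Video ID:", response["id"])
```

Back in the browser, that same resume behavior is what saves you when a tab closes mid-upload – which is exactly what we’re about to test.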
OK, it’s been several hours, and I’m going to open a new tab and navigate back to the upload page. Now, I’ll reselect the same file. The upload picks up where it left off, and the file continues to upload. Great for when the internet goes out. 2. Captioning Hacks I’ve been a massive proponent of video captioning for years. Mainly for making tech information more accessible to as many people as possible, but also because it boosts the visibility of your videos via SEO, or Search Engine Optimization. Now, engines like Google not only index the tags you assign to the video but also the objective content that’s heard in the video…what’s being said. Now, YouTube’s captioning is not foolproof. If the audio recording quality is poor, or if there are a lot of slang terms or names, the auto-captions may be wrong. However, the auto-captions are a great start, because not only do they attempt to get stuff right, but they also TIME the captions for you. After YouTube does its best to auto-caption, you can use YouTube’s built-in transcript and caption editor to fix mistakes and further stylize the timing, and you can also download that same transcript and use it in your NLE, so you have a caption file for any other platform you like. Many other professional VOD outlets or platforms accept closed captioning as embedded captions, or even as sidecar files. This also means you can turn the text of your transcript into an accompanying blog post. In fact, that is what I do for this web series and podcast. Another hack: if you have scripted content or a teleprompter script, you can upload that to YouTube as well. YouTube will then time the script you have against the video you’ve uploaded, and convert the script into captions…which once again, you can download and use. As a side note, I have a tutorial on how to accomplish this very thing, turning your script into timed captions using YouTube…check it out. It’s not only a massive time saver but also a way to boost your SEO ranking and give you content for a webpage. 3. Playback Shortcuts Keyboard shortcuts are your friend. The mouse is wholly inefficient and only serves to slow you down. Have no fear, the smart folks at YouTube have incorporated many of the shortcuts found in your NLE into YouTube. Let’s check some of ‘em out. Let’s start with the old reliable “J”, “K”, “L”, or rewind, play/pause, and fast forward. J = rewinds 10 seconds. K = play/pause. L = forwards 10 seconds. What about Spacebar for play/pause? You betcha. My favorite is frame-by-frame playback, especially when you’re trying to deconstruct that brand new movie trailer that just dropped, or trying to emulate and reverse engineer an effect from someone else’s video. For that, use the comma key to nudge 1 frame at a time in reverse. Conversely, you can hit the period key to do a 1 frame advance. You can also add the shift key to adjust the playback speed globally. Shift + the period key speeds playback up 25%, and hitting it again boosts playback to 50% faster. The reverse is true for the shift + comma key combo: 75% speed, 50% speed, and so on. Here’s a good list for you to earn some muscle memory for. Now, if you’ve spent any amount of time on Netflix, you’ve probably stumbled across this window, too. Or come across this substantially more involved window: A/V Stats. Advanced Netflix Stats: Display A/V Stats (Ctrl + Shift + Alt + D) YouTube has something very similar; a handy-dandy “stats for nerds” window.
When playing back a YouTube video, right-click on the player window. From this menu, select “Stats for Nerds”. An overlay window will pop up in the upper left showing you all sorts of nerd goodness. This may pacify your need for numbers, but it’s also a great troubleshooting tool for playback issues. You may notice in this overlay the heading “Volume / Normalized”. This is something very sneaky which we’ll go into…now. 4. Audio Secrets A little-known secret is that not only does YouTube always re-compress your video, it also diddles your audio work. And I don’t mean just a simple transcode, I’m talking about adjusting your audio levels and sonic dynamics. More specifically, Loudness Equalization. You saw this in the last tip when we checked out Stats for Nerds – that’s what the “Volume / Normalized” heading is. YouTube analyzes the audio present in your video and determines whether the volume at certain places – or overall – needs to be adjusted for a pleasurable playback level for the viewer. YouTube is messing with your audio! In this example, you can see the first percentage is the volume in your YouTube player window, and the second percentage is how much normalization YouTube is applying; that is, how much the volume is being reduced. The last number is the “content loudness” value, which is YouTube’s estimate of the loudness level. Now why on Earth would YouTube do this? It goes back to something I mentioned earlier, and that’s user experience. If you had to constantly change the volume on your headphones, your computer speakers, or your TV for every video you watched, you’d get pretty annoyed, right? It’s also quite dangerous to have one video that’s very quiet, so you turn the volume up, only to have your ears blown off when the next video that plays is super loud. A variant of this is actually part of the CALM Act, which holds broadcasters accountable for the same thing. You’re probably asking, “How can I stop YouTube from doing this to my mix?” Well, just like stopping YouTube from re-compressing your video…you can’t! However, what you can do is minimize the audio tweaking YouTube does, so that as much of your artistic intent on loudness as possible is retained. You can start by metering and mixing your audio using Loudness Units relative to Full Scale, otherwise known as LUFS. YouTube doesn’t release exact values, and reports differ, but YouTube tends to shoot for -12 to -14 LUFS, so try and keep your audio in that ballpark to reduce the amount of tinkering that YouTube does (if you’d rather script that check than stare at a meter, see the sketch below). That being said, aiming for one specific integrated loudness value doesn’t work reliably. Experiment and see what sounds good to your ears. LUFS metering with iZotope’s Insight. Further reading on YouTube, audio compression, and LUFS: https://productforums.google.com/forum/#!topic/youtube/JQouU5gi1ZA http://productionadvice.co.uk/youtube-loudness/ http://productionadvice.co.uk/stats-for-nerds/ http://productionadvice.co.uk/how-loud http://swt.audio/maximising-audio-for-youtube/ 5. Advertising Tips Now, there may be a point in time when you decide to do some advertising, or perhaps make recommendations to your friends or clients on advertising. This is where AdWords can go from simple to difficult very quickly. But what if I told you about hyper-focused marketing techniques that are not too difficult? Let’s assume you know the basics of AdWords.
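One more loudness aside before the advertising talk: if you’d rather script that LUFS sanity check than stare at a meter, here’s a minimal sketch using the third-party soundfile and pyloudnorm Python packages. The file name is a placeholder, and the -12 to -14 LUFS “ballpark” is just the rule of thumb mentioned above, not an official YouTube spec.

```python
# Minimal sketch: measure the integrated loudness (LUFS) of a finished mix.
# Requires third-party packages: pip install soundfile pyloudnorm
# "mix.wav" is a placeholder for your own exported mix.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")            # audio samples as floats + sample rate
meter = pyln.Meter(rate)                   # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)

print(f"Integrated loudness: {loudness:.1f} LUFS")
if -14.0 <= loudness <= -12.0:
    print("Roughly in the -12 to -14 LUFS ballpark; YouTube should tinker less.")
elif loudness > -12.0:
    print("Hotter than the ballpark; expect YouTube to turn it down.")
else:
    print("Quieter than the ballpark; it'll likely play back soft.")
```

OK, enough audio nerdery – back to those hyper-focused AdWords tricks.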
At a very high level, there are two kinds of YouTube ads you can create with AdWords: an in-stream video ad, either skippable or not, that runs before the start of a video; or an in-display ad, which appears as a suggested video in the YouTube sidebar to the right of the video you’re currently watching. There are more parameters to set up within AdWords, but you basically give AdWords a budget, and AdWords uses that budget to bid on ad placements on your behalf against other advertisers. You’re probably asking, “But how do I have my ads placed on the RIGHT videos?” Here’s where the tricks come into play. AdWords not only lets you bid on the general types of videos you can place your ads on – say, folks interested in sports or technology – it also lets you specify exactly which videos and channels your ad is shown on. Provided those videos or channels you wanna advertise on accept monetization, you can selectively choose to advertise on them. This gives you hyper-focused marketing. AdWords allows you to selectively show your video ad on a specific video OR channel. And what’s to stop you from doing a custom intro for each of the commercials you place? This allows you to speak to the exact users consuming the video and market directly to them. Think about that for a minute. You can reference the content in the exact video you’re advertising on. You can speak to the people who are watching that exact video. Instead of casting a wide net, you fire a narrow, highly targeted advertising bullet. We’re talking assassin-type precision. Do you have more YouTube tips and tricks other than just these 5? Tell me about them in the Comments section. Also, please subscribe and share this tech goodness with the rest of your techie friends. They should thank you and buy you a drink. Until the next episode: learn more, do more. Like early, share often, and don’t forget to subscribe. Thanks for watching.
Have you ever thought about using a Hackintosh? Wondering how they perform? Or maybe you wanna build one? Fear not my tech friends, for in this episode, we’ve got you covered. 1. Why Build a Hackintosh? If you spend any amount of time following Apple, you’ve realized that they are a consumer technology juggernaut. Phones, tablets, watches, headphones. This has led some to speculate that Apple isn’t paying attention to the professional market. That is, Apple isn’t making computers for those who need a lot of horsepower for creative applications, and expandability to make the system more powerful than what the factory model ships with…or what Apple deems us as worthy of. We also need to look at the cost. The Apple logo carries a price premium, and with few exceptions, Apple computers are more expensive than their Windows or Linux counterparts. And while I concede that a ready-to-roll machine should cost more than the sum of its parts, Apple tends to inflate this more than most. Another reason to build a Hackintosh…is, well, because it’s there. Because you can. Well, physically, anyway. I’m not a lawyer, and debating the legalities of building a Hackintosh is not my idea of an afternoon well spent. However, the tech challenge in and of itself is enough for some to dig in. Lastly, owning a Hackintosh means that at some point you’re gonna need to troubleshoot the build when a software update breaks things. If you don’t build it yourself, you’re not gonna know where the bodies are buried, and you’ll be relying on someone else to fix it. For all of these reasons, I rolled up my sleeves, grabbed some thermal paste, and went down the road of building my very own Hackintosh. 2. Choosing the Right Parts “Look before you leap.” When building my Hackintosh, this was my cardinal rule. See what others had done before, what hardware and software junkies had deemed as humanly possible, and follow build guides. Although I was willing to build it, I didn’t want it to be a constant source of annoyance due to glitches, with no avenue to search for answers if things went south. Part of building a Hackintosh is being prepared for things to break with software updates – and to only update after others have found the bugs. I wanted to keep the tinkering after the build to a minimum. More createy, less fixey. The main site online for a build like this is tonymacx86.com. The site has tons of example builds, a large community on their forums, and even better, users who have done this a lot longer than me. A great starting point is the “Buyer’s Guide”, which has parts and pieces that lend themselves to the power that many Apple machines have. A CustoMac Mini, for example, is closely related to the horsepower and form factor you’d find with a Mac Mini. As I tend to ride computers out for a while, I decided to build a machine with some longevity. Longevity meant building a more powerful machine, and thus one as close as possible to a Mac Pro. And wouldn’t you know it, there is a section called “CustoMac Pro”. The downside to a machine as powerful and expandable as a CustoMac Pro is that it’s fairly large. After I took inventory of all of the expansion cards I’d want to use, I realized I didn’t need everything that a CustoMac Pro afforded me. The large motherboard in that system – known as an ATX board – was simply overkill and too large of a footprint for my work area. I could actually go with something a little bit smaller and still have plenty of horsepower. So, I looked into the CustoMac mATX builds.
M stands for Micro, and an mATX board is similar to a full-sized ATX board, just a bit smaller. I’d lose some expandability with the smaller micro ATX motherboard, but I could use the same processor I would use in a full-size build – in this case, a Core i7-8700K – and still get a decent amount of RAM (64GB) and have a couple of PCIe slots for a graphics card and a future 10GigE card. I then went through the process of combing through the forums to see if there were any guides or posts pertaining to the parts outlined in the CustoMac mATX section. And wouldn’t you know it, there was an extremely in-depth post that outlined each step in detail. (Seriously, follow this guide if you get the gear below; it outlines things I haven’t, for the sake of brevity.) Next, I cross-referenced the parts listed with reviews online, and I also consulted various communities and folks off of the tonymacx86 site to get some independent opinions. This caused me to change a few things up, like getting quieter fans, a more stylish case, and a few minor tweaks. At this point, I was fairly convinced the parts and accompanying guides and forum posts were going to be enough to point the way, so I pulled the trigger and bought the parts. As the build was going to be massively based on the work that others had done before me on tonymacx86.com, I purchased the parts via the site’s referral codes. Sure, I paid a little bit more, but let’s support the community, eh? Here’s the parts list (title – role – price – quantity):
Noctua NH-D9L dual tower CPU cooler (Intel LGA 2011/1156/1155/1150, AMD AM2/AM3/FM1/2) – Heatsink with fan – $54.95 – Qty 1
Noctua NF-S12A PWM 120mm case cooling fan – Fan – $19.95 – Qty 2
Noctua NF-A8 PWM Premium 80mm case fan – Fan – $15.95 – Qty 1
EVGA GeForce GTX 1070 SC GAMING ACX 3.0 Black Edition, 8GB GDDR5 (08G-P4-5173-KR) – Graphics card (GPU) – $449.99 – Qty 1
Intel Core i7-8700K, 6 cores, up to 4.7GHz Turbo, unlocked, LGA1151 (BX80684i78700K) – Processor – $349.89 – Qty 1
ASUS ROG STRIX Z370-G GAMING (Wi-Fi AC), LGA1151, DDR4, microATX, onboard 802.11ac, Gigabit LAN, USB 3.1 – Motherboard – $184.99 – Qty 1
Corsair RMx Series RM750x 80 PLUS Gold fully modular ATX power supply (CP-9020179-NA) – Power supply – $109.00 – Qty 1
Samsung 960 PRO NVMe M.2 512GB SSD (MZ-V6P512BW) – OS “drive” – $298.20 – Qty 1
Corsair Carbide Air 240 Micro-ATX / Mini-ITX high-airflow case, black – Case – $86.99 – Qty 1
Ballistix Sport LT 64GB kit (16GBx4) DDR4 2400 MT/s (BLS4K16G4D240FSB, gray) – RAM – $679.56 – Qty 1
Total: $2,269.42 USD (not including tax and shipping)
Now that the parts were ordered, it was time to prep the macOS installer. 3. Building the Hackintosh A computer won’t do much for most of us unless it has an OS. In order to get macOS onto a non-Apple machine, we need to prep the OS appropriately. Unibeast with High Sierra Step 1 is to download Sierra or High Sierra from the Apple App Store on another Apple computer. Step 2 requires us to download a Mac app called Unibeast. Unibeast will take the macOS installer and place it on a bootable USB stick along with an app called Clover, which contains the files needed to allow the OS to install on non-Apple hardware. For Step 3, we need to format a USB stick for the OS to live on. Make sure it’s formatted as macOS Extended (Journaled), and make sure the partition size is relatively small.
Unibeast recommends a 7GB partition . Larger sizes, like newer 128GB, 256GB or even larger sticks, just won’t fly – partition them into a smaller size. I also recommend a USB 3.0 stick, it’ll make things go a little bit quicker. Launch Unibeast for Step 4 and follow the prompts to select the USB stick, as well as various options for install – such as the Clover EFI Boot type and inserting legacy graphics drivers into the install if necessary. Then, let Unibeast create your installer on your USB stick. A note about the EFI Bootloader config: When your Hackintosh boots, it looks for an EFI partition. The EFI partition contains basic system drivers and options. If your EFI folder is borked, well, so will your build. This is where Unibeast, Clover, and your hardware need to be in sync. As you’ll see later on in the video, my EFI folder during the USB stick build was no bueno, and caused a bunch of issues. Now that we have our macOS installer prepped, let’s get to the hardware build. At this point, I *highly* recommend you watch the video and/or read the guide I followed . Build notes: Install power supply first so as to thread cables appropriate through the case. Have needle nose pliers for the Noctua fan installation. Did your motherboard come with screws? Probably not. Then they should be in your case. Better check. Ensure any backplates for the processor are installed before you put the motherboard in the case. Don’t forget your heatsink spacers, too. I then went into the system BIOS as outlined by the guide , and replicated the BIOS changes. Then, reboot! 4. Installing the macOS As you have the USB installer from the last step, boot the Hackintosh from it and install as normal. Below are the issues I ran into. After the OS was installed, the system booted very slowly (over 60 seconds) and upon logging in, the GUI refreshes looked odd. I also found the system didn’t recognize the NIC. As the EFI partition usually loads these drivers, I decided to swap out the drivers located on the Boot “drive” for the drivers on the EFI partition on the USB stick. As the EFI partitions are normally hidden, I used an app called EFI Mounter 3 . This allowed me to mount the hidden EFI partitions on each volume, and subsequently copy the EFI folder from the USB stick to the boot “drive”. EFI Issue? This seemed to do the trick, as the correct nVidia (web) drivers were loading, and screen refreshes looked correct. The system also booted faster. I’m not sure what happened – it may have been a Unibeast and Clover version mismatch, or just a bad USB stick creation. For whatever reason, the system was now functioning correctly. I proceeded to then perform post-installation tweaks in the guide, including editing the config.plist so the RAM was seen properly, as well as changing the label of the Mac (for cosmetic reasons). 5. Performance and Conclusions First, we have black and white analytics. Raw horsepower. A common tool is Geekbench . Download the Geekbench app, let it run and whammo, you get performance metrics. I decided to compare my build against a top of the line Late 2017 iMac Pro, retailing for $13,199 ( thanks Lumaforge! ) as well as against a Late 2013 Mac Pro canister, with a retail cost of about $7,000 today . My build came in at just over $2500 . Here are the results of the Geekbench testing. You can see that my build beats the Mac Pro hands down, but is handily beat by the multicore performance of an iMac Pro, albeit at a price tag that’s 5x as expensive. 
As each of these computers was built with much different parts, a straight horsepower comparison isn’t enough. So, I also benchmarked all 3 systems with Adobe Creative Cloud (Premiere Pro and Media Encoder), Apple (FCP X), and Avid (Media Composer Ultimate) software. First, I did timeline render tests with Adobe Premiere Pro 2018. I also used Adobe Media Encoder and exported to an h.264. The results are pretty much in line with the Geekbench results. The iMac Pro was the fastest performer, followed by the Hackintosh, and then the aging Mac Pro. Remember, the shorter the time, the better (faster).
Premiere Pro render / Media Encoder export:
Hackintosh – 8:36 / 1:27
Late 2017 iMac Pro – 6:37 / 1:20
Late 2013 Mac Pro – 10:25 / 2:04
Now on to FCP X, v10.4.2, where I expected my system to fall down due to the graphics card being nVidia as opposed to the AMD cards found inside other Apple computers. I did a timeline render benchmark, plus a Compressor encode time trial. For renders, all 3 systems were very, very fast; in fact, FCP X rendered faster than any of the other NLEs. My Hackintosh did indeed end up rendering the slowest – however, all systems rendered within a few seconds of one another.
FCP X render / Send to Compressor export:
Hackintosh – :48 / 1:22
Late 2017 iMac Pro – :25 / 1:53
Late 2013 Mac Pro – :35 / 3:19
That being said, my build exported the fastest, barely beating out the iMac Pro. Lastly, Avid Media Composer, where I tested with the 2018.5 Ultimate version. The iMac Pro came in first again, with the Mac Pro actually slightly beating my Hackintosh…however, all 3 systems were within seconds of one another. Export times are largely irrelevant out of Media Composer – each system exported in exactly the same time – given Media Composer’s reliance on 32-bit QuickTime 7 for exports.
Media Composer render / Media Composer export:
Hackintosh – 1:59 / 3:56×2 (2-pass)
Late 2017 iMac Pro – :46 / 3:56×2 (2-pass)
Late 2013 Mac Pro – :56 / 3:56×2 (2-pass)
So, “how long did it take you to build it?” The initial build took 9 hours. This includes research, the hardware build, the software build, and initial software troubleshooting. However, in the month since I built the machine, I’ve had to spend an additional 3 hours troubleshooting thermal issues, and buying and installing 2 additional fans, bringing my total to a little over 12 hours. Do I have any regerts? A few. The motherboard I chose didn’t have Thunderbolt on board. Not that I have any Thunderbolt devices, but it would have been nice to have the option for the future. I also haven’t seen much of a performance gain between OpenCL and CUDA playback inside Adobe apps. I purposely went with a GeForce GTX 1070 SC as I expected performance gains with CUDA enabled. As other apps – namely Final Cut Pro X – are optimized for AMD cards, I would rather have gone with a Vega card, used OpenCL inside Adobe, and still maximized the performance from other AMD-enhanced apps. So, “was it worth it?” As a full-time tech nerd and part-time creative – yes. It’s the first time I’ve built a computer from scratch in over a decade, and I learned more about the underpinnings of macOS than I otherwise would have. But do the cost savings outweigh the peace of mind of a fully supported, warrantied, and sexy-looking piece of Apple gear? For someone who is a full-time creative professional – I’m gonna say no. You need a system that works, one you can apply updates to when needed and easily add additional hardware and software to. Time is money, and the less time you spend troubleshooting, the better.
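If you want to put a rough number on that “time is money” tradeoff, here’s a back-of-the-napkin sketch using the approximate street prices and the Premiere Pro render times quoted above. The prices are the ballpark figures from this episode, so treat the output as illustrative rather than definitive.

```python
# Back-of-the-napkin price vs. performance, using the approximate street
# prices and Premiere Pro render times quoted above (mm:ss).
def secs(mmss: str) -> int:
    m, s = mmss.split(":")
    return int(m) * 60 + int(s)

systems = [
    ("Hackintosh",           2500,  "8:36"),
    ("Late 2017 iMac Pro",  13199,  "6:37"),
    ("Late 2013 Mac Pro",    7000, "10:25"),
]

base_price, base_time = systems[0][1], secs(systems[0][2])
for name, price, render in systems:
    dt = base_time - secs(render)   # seconds saved per render vs. the Hackintosh
    dp = price - base_price         # extra dollars vs. the Hackintosh
    print(f"{name:20s} ${price:>6,}  render {render:>5s}  "
          f"({dt:+d}s, ${dp:+,} vs. Hackintosh)")
```

It won’t settle the Apple-tax debate, but it’s a quick way to gut-check whether a faster machine actually buys back enough time on your workload.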
Until the next episode: learn more, do more. Like early, share often, and don’t forget to subscribe. Thanks for watching.…
In this episode, we’re going to dive deep into live streaming . Sure your phone can do it, but if you want a professional live stream to YouTube or Facebook or just about anywhere else, then this episode is for you. My friends at LiveU asked me to do a deep dive on their Solo streaming device. I suggested, “why not a shootout between the LiveU Solo and the Teradek VidiU Pro ?” – (which is another similar device). So, is that cool with you, LiveU? 1. What do they do? Both units have a ton of similarities; so let’s get those out of the way first so we can focus on the ooey gooey differences. At the core of both units is the ability to take an SD or HD video source, usually, HDMI, encode the signal, and stream it out to the web to sites like YouTube or Facebook. This can be via a traditional Ethernet Internet connection, going mobile with Wi-Fi, or even streaming via a USB cellular 3G/4G connection. Both The LiveU Solo and VidiU Pro have the ability to bond across a number of onboard connections, which gives you the ability to have an added level of not only redundancy but also throughput, to ensure your signal is getting out smoothly and at the highest bandwidth possible. Both units, however, charge extra for this feature, which I’ll get into that later. Both units can take AC power, run on an internal battery, are small in size, and both weigh under a pound (although the VidiU Pro is less than half of that) and they easily mount on your camera or attach to your Batman utility belt. The units also have built-in batteries so you can stream for several hours before having to re-charge. Both models also have a webpage backends to control the units…. to varying degrees. Web Interfaces for the LiveU Solo (left) and Teradek VidiU Pro (right) The Solo and VidiU Pro both do their jobs very well. But the devil is certainly in the details, and that’s what we’re going to tackle next. 2. Setup and Connectivity The LiveU and Teradek units both allow setup via a web browser on your computer or mobile device. LiveU, via the LiveU Portal and Teradek via the webserver on the unit. LiveU is pretty straightforward. Create an account at http://solo.liveu.tv/ , boot up your LiveU Solo unit (although this may take a while), and connect your Solo to the internet. Then, enter in your Solo serial number into the portal, and you’re off and running. You now can do your entire streaming configuration of the Solo signal on the webpage for your unit. Now, this is a blessing and a curse…which I’ll get into a bit later on. Teradek’s setup is very similar. Boot up the unit (which doesn’t take as long as LiveU), and get your unit on your local network – it doesn’t even have to access the internet at this point. It just needs an IP address so your local computers or Wi-Fi enabled cellular devices can see it. Unlike the Solo, the VidiU Pro has a web server built-in, which makes configuration a bit easier. Why is that Michael? Having a local webserver built-in means I don’t need to have Internet connectivity on my laptop or workstation if I’m out in the field and want to configure the unit. Yes, I know, you can use cellular devices like your phone to configure their unit, but if cellular service on location is lacking for your phone’s carrier, now what? It’s a slight issue, but I think having a web server built into the unit as Teradek does makes for an easier way of local configuration. 
Speaking of cellular connections, at this point, if you’re planning on going into the great wide open and streaming via 4G connections, make sure you’ve purchased your cellular modems and chosen the right data plan – one that won’t throttle your bandwidth or cut you off at the knees during an event. Now it’s time for the speeds and feeds and goes in and goes out part of our review. For that, LiveU has the upper hand. With 2 USB ports, the Solo offers an extra layer of USB connectivity. Normally this would be used for 4G cellular modems , which means you can bond across 4G connections and across multiple carriers for a smoother streaming experience. The LiveU Solo has 2 USB ports for 3G/4G cellular modems, compared to the Teradek VidiU Pro’s 1 USB port. Plus, while the Solo itself is HDMI, LiveU does offer an SDI model for a bit of a bump in price…. so those of you with pro cameras don’t need to have another converter with you in your bag of tricks. What Teradek VidiU Pro lacks in a 2nd USB connection it kind of makes up for with onboard recording. The Teradek unit can handle an SD or SDHC card so you can record a local copy of your incoming signal. This comes in handy if you plan to do a cut down later. Unlike the LiveU Solo, the Teradek VidiU Pro has onboard recording onto an SD or SDHC card (class 10 or faster) There is also a major difference in the way audio is handled. The VidiU Pro has a 1/8” audio input – maybe you’re pumping in a mic or music from another source. There is also an 1/8” jack for headphones as well. The Solo also has only one 1/8” jack – and right now, it’s inactive, designed for future expansion. 3. Flexibility and Usability This is where comparisons get tricky because this is where we begin to see the sweet spots for each unit. LiveU has many more built-in CDNs presets – CDNs , or content delivery networks –are the endpoints you would broadcast to in order to make your content viewable by everyone in the world, like YouTube or Facebook. The Solo has twice the amount of optimized and tweaked presets out of the box than the VidiU Pro. However, both units do allow for generic RTMP, or Real Time Messaging Protocol – which is a rather generic way of streaming online) inputs so you can manually create presets for just about any online outlet with a minimal headache. Both devices have configuration panels on the unit via joysticks, but far and away, the Teradek unit has more editable info available via the front panel than the Solo does. I really like the small color preview window on the LiveU Solo, however. The preview window on the Solo gives me a bit more confidence that I’m sending out the right signal….and if it’s right from the camera, I can verify there are none of those pesky overlays on the image! The flip side is that I can’t test the Solo’s streaming functionality without a valid input signal. This makes the Solo a bit difficult to test if the other parts of your streaming rig, like the camera or the switcher, are not already set up. I’d love a built-in bars and tone generator with the streaming specs overlaid over the signal for this purpose. Now, when streaming, battery life is of concern. The Solo boasts a 2-hour battery , while the Teradek boasts 3 hours. Teradek VidiU Pro I tested this out by setting both units to their max bitrate and resolution, and streamed over Wi-Fi to the same CDN until their batteries died. 
My testing showed that the VidiU Pro was nowhere close to the proclaimed 3 hours of battery life, failing at 108 minutes – not even making it to two hours. To be fair, the VidiU Pro results may be slightly skewed, as the unit inexplicably stopped streaming 4 times during the test for no apparent reason, and I had to manually restart its stream each time. By comparison, the LiveU had more than 50% of its battery remaining when the VidiU Pro died, and continued streaming for another 100 minutes after…with no drop-outs. The LiveU Solo has 2x as much battery life as the Teradek VidiU Pro. 4. Streaming Quality The LiveU Solo and Teradek VidiU Pro both stream in an h.264 format, and deliver excellent visual quality at their respective data rates. (Editor’s Note: I was going to put side-by-side samples in the video, but after editing and re-compression by YouTube once uploaded, the differences were barely noticeable and subject to interpretation.) I did find I had more flexibility with the LiveU, as I can override the unit’s presets and manually specify the data rate; the VidiU Pro forced me into one of several presets. The Solo also gives me a max data rate of 5.5Mbps, whereas the VidiU Pro tops out at 5Mbps. Quite honestly, I don’t see most folks noticing this visually unless you’re really pushing the limits of 1080p60. Another point on 1080p60: this is VERY important for sports, both live and eGaming-based, and other content where you want the largest frame size and frame rate possible. The VidiU Pro tops out at 1080p30, and only reaches 60 frames at a 720p resolution. If your endpoint is demanding 1080p60, then LiveU is gonna be your first choice. The Solo also encodes in the h.264 High profile, while the VidiU Pro only supports Main profile. This means more efficient compression that should manifest as better visual quality at lower data rates. High is also the profile that’s traditionally used for larger-screen HD monitors, while Main profile is generally reserved for smaller screens and less powerful devices, like tablets and smartphones. LiveU takes a staggered approach to streaming: the data rate starts low and climbs to the intended (or available) data rate, usually within 60 seconds or so. This can be problematic for those who are running and gunning and need to go live instantaneously. I see it as a safeguard to have a stream start smoothly while buffering. You can certainly work around this by starting a stream, but not going live on your CDN until you hit your desired data rate. The VidiU Pro, by comparison, starts at the desired bitrate as soon as you start streaming. LiveU also has a cool tech called LRT, or LiveU Reliable Transport – a tech they actually hold the patent on. This includes forward error correction, packet ordering (which is mandatory when bonding across multiple signals), and adaptive bitrate encoding. It’s the adaptive part I like, because the audio is always given a higher quality priority than the video, so even if the bandwidth drops, the audio quality will remain good. Keep in mind, utilizing LRT on the Solo is an extra $45/month charge. But then again, Sharelink from Teradek, which also provides bonding across connection methods, is also a monthly upcharge, depending on your bandwidth needs.
LiveU LRT monthly cost vs. Teradek Sharelink monthly cost. Now, in terms of streaming reliability during my testing, the VidiU Pro stopped streaming for unknown reasons a handful of times – four, in fact – during a nearly two-hour stream test, while the Solo never stopped streaming until I told the unit to do so. The Teradek VidiU Pro stream was interrupted multiple times for no apparent reason, while the LiveU next to it ran without any issue. I was able to replicate this behavior during other, shorter tests as well. I wish that when the VidiU Pro ran into a streaming issue, it would try to correct itself…better. Whenever the stream was interrupted, I had to manually restart the stream on the Teradek unit. 5. Conclusions It’s obvious both of these systems will serve you well. However, with the extra USB port for a cellular modem, the granular control over streaming bitrates and the h.264 High profile setting, the larger choice of built-in CDNs, the awesome LRT tech, fantastic battery life, and not one interrupted stream during my testing, the Solo is the best choice when streaming is your number one concern. It’s simply meant for mission-critical streaming in broadcast situations. If I had to choose one portable unit to encode and stay on the air, it would be the Solo. Now, the Teradek VidiU Pro is part of a larger, much broader Teradek ecosystem, which includes many of their other hardware offerings and their software bonding solution – Sharelink. More web portal control, onboard recording, and the ability to make changes from the device itself make the VidiU Pro a great Swiss Army knife. But the shorter battery life and the interruptions during streaming make the VidiU Pro seem like a jack of all trades, but a master of none. Have more questions about the LiveU Solo vs the VidiU Pro other than just these 5? Ask me in the Comments section. Also, please subscribe and share this tech goodness with the rest of your techie friends. You can also stalk me online if you’d like – I’m easy to find. My thanks to LiveU for their financial support and sponsorship of this episode. Until the next episode: learn more, do more. Like early, share often, and don’t forget to subscribe. Thanks for watching.
1. What ways are there to create a Roku channel? When I cut the cord a few years ago, I still needed to find a way to enjoy video media without simply relying on Netflix and YouTube. And buying many of the other subscription-based apps, like HBO GO, would simply jack up my monthly charge…which is why I canceled cable in the first place! I looked at all of the OTT options out there, including AppleTV, FireTV, and various gaming consoles. Roku caught my eye as an inexpensive way to find and consume content, as it had thousands of public and private channels – far more than most others – and already a wide viewership. In fact, as of June this year, nearly 40 million Americans used their Roku at least once a month. It’s the #1 OTT platform out there. Roku has also just gone public, which means the company is going to accelerate even more quickly. While I was thrilled with the amount of content I had easy access to with Roku, it wasn’t long before I wanted to get on the bandwagon and create my own channel for 5 THINGS. The Roku channel paradigm is pretty simple. A channel is much like an EDL – it simply points to where the media lives externally and streams it when told to play, so channels are traditionally lightweight, as they only contain a few images and the code that links to your media elsewhere. This seemed simple enough. However, when I started to create my own channel, there were only 2 ways to do it yourself. The first was to use a site like instanttvchannel.com, which is a cloud-based Roku channel production system. Through a series of dropdowns and prompts, you could create your very own Roku channel. The problem I had was that you had to pay per month as long as the channel was active – anywhere from $5 to $50 per month – and your account may have an InstantTV splash screen on launch, amongst other gotchas. instanttvchannel.com The second way was to develop against the Roku software development kit, or SDK. Programming a Roku channel is based around the BrightScript coding language, which is a bit like JavaScript mixed with Visual Basic. Now, I could learn BrightScript and code from scratch, or I could take some basic templates already configured for base levels of functionality and augment them as needed with modules of code. So that’s what I did. It still relied on some coding and lots and lots of testing, and I spent a ton of time on forums and help guides. But it was the best choice, since I wasn’t going to develop Roku channels full time. It took about 100 hours of work, and the purchase of an old SD Roku model for testing on legacy gear, to get my first channel up and running. And then there were the updates. Several hours per month, after each new episode, creating and testing versions for SD and HD Roku models. But you fine folks are in luck. Last year, Roku introduced a 3rd way, known as “Direct Publisher”, which gives novice creators the ability to create a channel without writing a single line of code and have it work across most any modern Roku player. Hallelujah! And thus, how to create a channel with Direct Publisher is what I’ll show you today. 2. How do I use Roku’s Direct Publisher (walkthrough & tutorial) Since Direct Publisher is from Roku, you need to start off by creating an account with Roku. If you already have a Roku unit, then you probably already have an account. Sign in. Navigate to the developer homepage, and under “My Channels”, you’ll want to click “Manage My Channels”. Click “Add Channel”.
As you can see, we have the aforementioned Developer SDK and Direct Publisher methods. You’ll want to select Direct Publisher, give it a descriptive name, and click “Continue”. Next, you’ll be prompted for a few options: where in the world your channel will be available, what language the channel is in, whether your channel is intended for children or adults, and a channel vanity code. The vanity code you can pass to potential subscribers so they don’t need to search the store for your channel. Give them the vanity code, and they can enter it into their account to immediately get your channel. Next, you’re presented with your Feed URL. This is where you may want to pause and watch the rest of this episode, because we’ll need to determine your Feed URL via other methods. Jump to “#4: How do I update my channel” for how to determine this, and then come back. Now that we’re back, I’m going to cut and paste my Feed URL, and tell Roku what format my videos are in. I’ll explore the various media types later in this episode. However, I’m going to select “Specified in feed” so I can dynamically change the media type at any point in time. Next, we’re at the branding page, and this is where you’ll upload custom graphics for your channel. It’s important to adhere to the graphic specs, or the page layout won’t look right. Roku will automagically make all of your graphics work on SD, HD, and UHD models of Roku, so don’t worry about creating various versions and resolutions. You also can choose your branding colors to complement your style. Page layout: one thing to remember is that while Direct Publisher makes it easy to get started, it does somewhat limit the layout of your channel. That’s the tradeoff. If you want total control over your layout, then you’ll need to build your channel from the Roku SDK. Next, we need to define categories. Since we’re going to define the categories in our feed, we’ll select “from feed”. Now, we need to enter our all-important metadata: channel name, its description and web description, your channel’s category, and the uber-important keywords. Plus, you’ll add a channel poster image. On the next screen: are ya making money off of this thing? Most likely when you’re starting out, the answer is no, so select as such. Next, upload a screenshot. Roku can autogenerate this if your Feed URL is ready; if not, upload a 1920×1080 JPG or PNG file. We’re now at Support Information, where you need to enter how Roku can get in touch with you, but also where you want to drive viewers to get in contact with you or see more about your channel and your projects. Click Continue, and you’ll see a summary of your channel. There are icons next to each section you’ve filled out, so you can see where you may have goofed. Often you’ll see problems next to Feed Status. Not all errors are show stoppers, and many of the errors you may encounter are easily found in the Direct Publisher Roku Forum. Once things are good, click the link towards the top of the page so the channel gets published to YOUR account. This is NOT public just yet – it’s just pushed to your Roku account so you can beta test. Add the channel. Now, move to your Roku unit, and navigate to Settings – System – System Update. This will have Roku not only look for updates to your software online but also add any new channels. Once the Roku unit is updated, you can open the channel and start testing. BOOM, you now have the framework for your first Roku Channel. Now, we need to add some content.
3. What kind of media do I need to create for Roku? If you’re going to promote yourself on Roku, ya need content, right? One immediate issue that many novice developers encounter is that any Roku channel you create cannot link to YouTube videos, which, of course, is a popular place to house your media. Doing so violates the YouTube terms of service, and Roku has been cracking down on channels doing these sorts of things…plus, YouTube doesn’t offer direct public links to their media anyway. You can, however, use a Vimeo Pro account, as they offer up direct MP4 links to your media. That being said, using a single, self-contained MP4, M4V, or MOV for that matter, can be problematic. It’s very difficult to create a single file that works well for streaming to every device, given bandwidth and resolution. Now, more advanced streaming formats that include segmented files at various quality levels – otherwise known as adaptive streaming – are preferred over standalone files. The end player can decide what version will play smoothly, given the available bandwidth at any point in time, so you get the highest quality file possible with no buffering errors. Microsoft Smooth Streaming, MPEG-DASH, and Apple HLS all follow this basic methodology. Of the 3 options that Roku supports, I chose Apple HLS, mainly because a majority of my audience uses Apple devices, and HLS is fully supported on iOS devices. You need to host this on a cloud provider or on your website. I chose to use my existing Amazon S3 bucket, but as you’ll see later, you may want to host it on your own website. As a side note, be sure to compare the cost of CDN hosting on Amazon S3 vs. how much traffic you can have on your website. Some web hosts may bill you for traffic overages, and hosting all of your media on your website may push you over the edge. This is why I use Amazon S3. If you hadn’t noticed, I caption all of my episodes. Roku accepts SRT caption files, as well as media with embedded captions. I use SRT as my main format and convert it into a VTT format, and I’ll explain why in a few minutes. I’m also a big fan of BIF files, or Base Index Frame files. These are graphic thumbnails that the user sees while rewinding and fast-forwarding through each episode. It’s a visual indicator of where the user is, not just a timestamp. I use BIF Video File Creator, now free, to create the BIF files from my final video edit. I also upload these to my Amazon S3 account. Lastly, I create a thumbnail of my episode, much like you would do for YouTube. In fact, I now use the exact same thumbnail I create for YouTube, as Direct Publisher will automatically size it for older SD players – something that the old SDK method couldn’t do. 4. How do I update my Roku channel? (walkthrough & tutorial) Roku’s Direct Publisher takes updates in 2 ways: via an MRSS feed, or via JSON. Now, you’re probably saying, “Michael! You said there was no coding!” There isn’t! Many online apps or website templates can generate MRSS feeds from a blog post. However, JSON offers a deeper level of integration…and there are apps and website plugins that communicate with the Roku mothership via JSON (I’ll show you a rough sketch of what that JSON feed looks like in a moment). Now, if you use WordPress as your website development environment, then you’re aware of all of the plugins that give your site more functionality. Recently, a plugin became available called WP Smart TV by Rovid-X Media.
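Before we dig into that plugin, here’s that rough sketch of what a Direct Publisher JSON feed can look like, built with plain Python and dumped as JSON. The field names are based on my reading of Roku’s feed spec and the URLs are placeholders, so double-check everything against Roku’s current Direct Publisher documentation before you rely on it.

```python
# Rough sketch of a Direct Publisher-style JSON feed, built in Python.
# Field names reflect my reading of Roku's content feed spec -- treat them as
# illustrative and verify against Roku's current documentation.
# All URLs below are placeholders.
import json
from datetime import datetime, timezone

feed = {
    "providerName": "5 THINGS",
    "language": "en-US",
    "lastUpdated": datetime.now(timezone.utc).isoformat(),
    "shortFormVideos": [
        {
            "id": "ep-youtube-tricks",
            "title": "5 THINGS: YouTube Tips and Tricks",
            "shortDescription": "Upload, caption, playback and ad tricks for YouTube.",
            "thumbnail": "https://example.com/thumbs/youtube-tricks.jpg",
            "releaseDate": "2017-09-01",
            "genres": ["educational"],
            "tags": ["post", "youtube"],
            "content": {
                "dateAdded": "2017-09-01T00:00:00Z",
                "duration": 900,  # seconds
                "videos": [
                    {"url": "https://example.com/hls/youtube-tricks/master.m3u8",
                     "quality": "HD",
                     "videoType": "HLS"},
                ],
                "captions": [
                    {"url": "https://example.com/captions/youtube-tricks.vtt",
                     "language": "en",
                     "captionType": "CLOSED_CAPTION"},
                ],
                "trickPlayFiles": [
                    {"url": "https://example.com/bif/youtube-tricks-hd.bif",
                     "quality": "HD"},
                ],
            },
        }
    ],
}

print(json.dumps(feed, indent=2))  # host the resulting JSON at your Feed URL
```

Generating and hosting a feed like that by hand gets old fast, which is exactly where the WP Smart TV plugin I just mentioned comes in.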
This plugin can use your WordPress site to house all of your media for your Roku channel, and it can push all of the updates to your Roku channel via JSON. No coding! For me, this was an instant workflow savior. It saved me hours of time per month in coding, testing, and manually creating artwork; plus, it centralized my data and media. Let me show you how it works: log into your WordPress installation, click on Plugins, and “Add New”. In the keyword search box, type “WP Smart TV”. Once the plugin shows up, click “Install Now”. For this demo, I am using v1.3.1 of WP Smart TV. Once installed, click “Activate”. You may also want to install the Jetpack plugin, which speeds up image loading. It can also fix the error message on the Roku developer portal where Roku cannot load thumbnails. Some WordPress hosts block loading images to Roku, and this gets around it. On the left-hand side of your screen, you’ll now see a heading for “WP Smart TV”. Click it. You now will see the options for your Roku JSON feed. Under General Settings, you can select the type of posts you’re making that best coincides with your content. Click Save. Under “Roku Settings”, you have options to limit the number of posts that show up on the channel per category, as well as settings for any advertisements you may have. I highly recommend reading the documentation on Recipes, so you can have more granular control over the order in which your categories show up on the channel. Note the URL at the top of this section. THIS IS IMPORTANT. This is the Feed URL that you will cut and paste into the Feed URL prompt we covered in the last step. This is how WordPress, WP Smart TV, and your Roku channel talk to one another. Click Save. WP Smart TV also supports other OTT devices, like FireTV; for Roku, we can skip over this. We can also skip over the VideoJS settings, as we are not monetizing this channel in this demo. Lastly is the Help documents tab, which I highly recommend reading. In order for WP Smart TV to add content to your feed, you’ll need to create a Video Post. On the left side of the screen is a heading called “My Videos”. Click it. From here, you’ll want to add a new video. Immediately, you’ll see the options for your Roku media. You’ll need a new video post for every video file you want in your feed. I know, if you have a large catalog of videos, this may be tedious…but it’s much better than coding. Put in your duration, your video format, your video quality, and the location of the media. As we talked about in the “How do I create media for Roku” section, add the media URL here. Add closed captions if you have them, along with their language and type. Currently, VTT is accepted, so I convert my master SRT files to VTT with any number of free web-based tools – or with a few lines of Python, as in the sketch after this walkthrough. The captioning tab is optional and can be left blank. Trick Play is used for the BIF files, which we covered earlier. Now, click Genres to further categorize your media, and use the custom fields for advanced control if you developed against the SDK. Be sure to upload a Featured Image for your post, as this is the thumbnail for the post. Also, enter a description at the top so viewers know what the episode is about. Click Publish. Congratulations, you now have your first video for your Roku channel. Every time you make a change to your Roku channel, you’ll need to go back to your Roku Developer page and refresh the feed. This is only temporary: once your channel is published and made public, the updating will occur automatically.
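As promised, here’s that SRT-to-VTT conversion as a few lines of Python. It’s a minimal sketch that handles the common case – swapping the millisecond separator and adding the WEBVTT header – so unusual SRT files may still need a dedicated tool. The file names are placeholders passed on the command line.

```python
# Minimal sketch: convert an SRT caption file to WebVTT.
# Handles the common case (timestamps like 00:01:02,345 --> 00:01:05,678);
# fancier SRT features may need a dedicated tool.
import re
import sys

def srt_to_vtt(srt_text: str) -> str:
    # VTT uses '.' instead of ',' as the millisecond separator.
    vtt = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", srt_text)
    return "WEBVTT\n\n" + vtt

if __name__ == "__main__":
    src, dst = sys.argv[1], sys.argv[2]   # e.g. episode.srt episode.vtt
    with open(src, encoding="utf-8-sig") as f:   # utf-8-sig strips any BOM
        text = f.read()
    with open(dst, "w", encoding="utf-8") as f:
        f.write(srt_to_vtt(text))
```

Run it as something like `python srt_to_vtt.py episode.srt episode.vtt`, then point the video post’s caption field at the resulting VTT file.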
5. What else do I need to know? First, ya need to buy a Roku player. You gotta know how the thing actually works, so you know how to design your channel. You also need to test your channel. As of now, there isn’t a way to 100% test your channel – whether it’s built with the SDK, a 3rd party website, or Direct Publisher – without one. So check out the current models, and see which one works for you. Also, this ain’t “Field of Dreams”. Just because you build it doesn’t mean people will flock to it. So, you need to promote it. This includes Roku-themed channel sites, as well as Roku user forums, plus your other outlets, like YouTube, Vimeo, and other social media channels. Now, as mentioned earlier, Direct Publisher does have a framework for ads and thus ad revenue, so you can conceivably make money off of your channel. But if you’re not making much on YouTube, do you really expect to make more on Roku? Roku is big, but not YouTube big. To be clear, I don’t recommend it as your sole point of distribution. But in a marketplace where there’s competition for eyeballs everywhere, branching out into a realm with less competition than, let’s say, YouTube means you have another audience to engage with. And 40 million eyeballs per month ain’t bad. Until the next episode: learn more, do more. Like early, share often, and don’t forget to subscribe. Thanks for watching.
1. What does the Helo do? The AJA Helo, in a nutshell, does 2 distinct things: it records an SD or HD video signal, and it also streams that signal to just about any streaming service you can find. Pretty simple, right? But before I get too much deeper, let’s take a quick look at where the Helo may work for you in a production environment. Here we see a typical small production setup. We have our cameras…and we have our capture device that takes in the cameras and other video sources…next we have the switcher, which cuts between angles, adds graphics, and mixes audio…and then we have our streaming device to get out to the internet, and lastly, the web host or CDN that takes our signal and shares it with the world. A traditional live production workflow. The Helo is that streaming device, and as an added bonus it can also be the recording device. Let’s take a look at how the Helo does this. First, the unit has 2 separate encoders. This allows the Helo to encode the incoming signal in two discrete ways – one as a local copy, and one in your streaming format du jour. This is pretty cool, because it means you’re not stuck with an archival version that was optimized for live streaming – it can be recorded at a higher frame rate and frame size so you can edit it after the event. This recorded copy can be written to a local USB drive or RAID, to a network mount point, or even to an SD card. As the encoder is limited to h.264 and tops out at 20Mbps, all of these onboard storage methods provide plenty of bandwidth with which to record the incoming signal. 20Mbps is pretty cool, as it’s a higher data rate than most other portable encoders out there offer. The rear of the Helo, with both SDI and HDMI sources (only 1 can be used at a time) Speaking of the incoming signal, the Helo has an SDI and an HDMI spigot; however, only 1 can be used at a time. Aside from embedded audio on these inputs, there is also an unbalanced 1/8” jack if you’re into the second-system audio kinda thing. Now, the Helo handles signals up to 2K at 60 frames; however, it tops out recording and streaming at 1080p60. 1080p60 – now, this is really important. 1080p60 is often requested for sporting events and gamers, where higher frame rates and frame sizes are needed. Many other portable encoders on the market will top out at 720p60, or at 1080 with slower frame rates. The Helo has the ability to stream to virtually any of the platforms you’d want to stream to via RTMP and RTSP. RTMP will cover a vast majority of the popular CDNs, including YouTube, Facebook, and Twitch. What the Helo does lack out of the box are pre-baked CDN presets. However, you can easily create those manually inside the box for re-use. Thus, you do need to understand what the CDN you’re using is looking for. Luckily, the RTMP settings for most popular platforms are widely published and just a simple Google search away. 2. How do I use the Helo? Download the latest firmware and utilities for your Helo at https://www.aja.com/products/helo#support A few things you need to do, handy person. As the unit is new, firmware updates have been coming frequently. So, go to the AJA website and download the latest firmware. With this being the IBC 2017 season, look forward to new firmware announcements. Also, you’ll want to download the AJA eMini Setup Utility and install it. This will allow you to update the unit’s firmware to get the latest features and bug fixes.
Now, let’s get the unit some juice with the included power supply, connect the supplied USB cable to the unit, and plug the other end into your computer. Launch the Setup utility on your computer and update the unit’s firmware. OK, now we need to get a signal to the unit via HDMI or SDI. If you’re using this for streaming, that signal will come from your switcher’s output, or even the output of your camera if your production is more of a “set it and forget it” type of affair. Don’t forget your separate audio line if your audio is coming in from somewhere else. Now, the unit also has HDMI, SDI, and audio outputs as well, so you could route these outputs to a monitor and mixer to check things, or place the Helo in front of your switcher in order to have a clean recording from your camera – also called an ISO of your signal – albeit in an h.264 format. On the rear of the unit is the LAN port, where you’ll want to run an Ethernet cable from the Helo to your router or switch so you can do the initial setup and control the Helo. There is no WiFi on the unit, or a way to use a portable USB LTE modem, so you’ll need to ensure you have a hardwired line. Back in the eMini Setup, there is a button to launch the web page that controls the unit. Your unit should be set to DHCP, so it should grab an IP address from your network when you plug it in. At this point, if you’re going to record the incoming signal, you should decide where that recording is going to land. If it’s a USB drive, that drive should be USB 2.0 or 3.0 and be formatted as ExFAT or FAT32. These are cross-platform, folks – not Apple HFS or Windows NTFS. This can be a RAID as well. If you’re going to an SD card, make sure it’s formatted as ExFAT only. Lastly, if you’re going to a network mount point, ensure you’re using the NFS or CIFS protocols. 3. How do I control the Helo? Front buttons on the Helo to trigger recording or streaming independently. We’ve got two ways to control the unit. We’ve got end-user controls, and then the way I like. The end-user control consists of physical buttons on the Helo for recording and streaming, so you can engage or disengage each of the functions independently of one another. Very simple, and very handy if nontechnical folks are going out to record or stream an event. And now the way I LIKE to handle things – my way. The Helo gets set up and can be controlled via your web browser; this is actually where you set up the parameters of the end-user controls. Now, while the web browser access makes things easy and AJA gets points for functionality, their graphical interfaces have never been the most eye-catching. Function over form . If you’ve ever used other AJA converters and recorders, like the FS series or Ki Pro family, then the interface will be very familiar. It’s from here you can monitor the status of the unit and see a summary of the parameters that have been selected and saved on the unit. At the top of every page is a current graphical representation of the recording and streaming status of the unit, and you can use the mini icons here to bounce back and forth between the recording and streaming functions. AJA Helo web interface shows the current status of the unit. If I haven’t beat this into your head already, this is also where you can independently alter the recording and streaming parameters of the Helo and then save them – which AJA calls Profiles. Alarms on the Helo. This is where things can get a little confusing. 
The Helo has Profiles for recording and streaming – 10 profiles for each. However, these profiles are not the same as Presets , which are system-wide groups of settings that you can save to the Helo. For example, you may have a recording profile for 720p and a streaming profile for 1080p. These are controlled independently, but a preset would configure both the streaming and recording profiles in one setting – a preset. Through this methodology, you could mix and match profiles to create specific presets – perhaps one for each physical streaming location, matched to the camera gear and the available bandwidth at that location. Of the myriad controls for each encoding parameter, there are some of note. You can change the profile of your h.264 encode: constrained baseline, baseline, main, or high. You can also de-interlace if the unit is getting an interlaced video signal. Plus, you can set the naming convention of your recorded clips, and you have the option of MOV- or MP4-wrapped h.264 clips or simple TS streams. **While not mentioned in the video, the video quality slider (bitrate) displays both a green and a red area. The green area is the recommended encoding bitrate range. Red areas are not recommended bitrates, either because they are too low or too high. If you attempt to set the recording bitrate outside of the green area, the Helo will override your input and move the bitrate into the green area. This is the Helo adhering to the Kush Gauge (a quick sketch of that math follows a little further down). Unfortunately, this happens behind the scenes (as of the 2.0 beta firmware). So, keep that recording bitrate slider in the green. AJA Helo Video Bit Rate – stay within the green area! What I dig is the easy “oh &$*# what’s wrong” menu on the left-hand side, where you get immediately notified of any errors that may be happening. Lastly, if you don’t want to use the hard buttons on the front or use the webpage, the Helo can take a calendar entry, like a Google Calendar appointment or an ICS file, to automatically trigger recording or streaming at a predefined date and time. Very simple. 4. How is the quality? I’m very impressed with the image – more so with the streaming than the recording, but I’ll get into that in a minute. Often when you compress a video signal down, one of the first things to be lost is the detail in the blacks. I find that if you keep your data rate high enough while still adhering to the specs of the CDN you’re using, the details in the darks translate very well from the Helo. I did over 3 dozen recording and streaming tests, encompassing various record times at multiple recording and streaming bitrates, with no hiccups attributed to the unit. I did find with very early firmware that I would sometimes get a frame or 2 of video hit while streaming on YouTube, but that seems to have been rectified with newer firmware versions. This does bring up an important point: always do a test from the location you’re streaming from before the day of the event . Congested networks and limited upload speeds can cause you to drop frames while streaming. Just because the unit can stream at 20Mbps doesn’t mean you should. As far as recording, I’m not a fan of recording into h.264. I figure if I’m going to record an ISO, or record the output of the switcher, I want something that’s high enough quality that I can manipulate the image in post, whether that be a color pass, or maybe even a re-edit…or even for archival purposes. H.264 files tend to fall apart after subsequent encodes. 
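A quick aside on that Kush Gauge: it’s a widely quoted rule of thumb for ballparking an h.264 bitrate from frame size, frame rate, and how much motion is in the shot. Here’s a minimal sketch, assuming the commonly cited 0.07 constant and motion factors of 1 (low), 2 (medium), and 4 (high) – treat the output as a starting point, not gospel:

```python
def kush_gauge_kbps(width, height, fps, motion_factor=2):
    """Ballpark an h.264 bitrate using the Kush Gauge rule of thumb.

    motion_factor: 1 = low motion, 2 = medium, 4 = high motion.
    Returns an approximate bitrate in kilobits per second.
    """
    bits_per_second = width * height * fps * motion_factor * 0.07
    return bits_per_second / 1000

# A couple of examples in the Helo's wheelhouse:
print(round(kush_gauge_kbps(1920, 1080, 29.97, motion_factor=2)))  # ~8,700 Kbps
print(round(kush_gauge_kbps(1280, 720, 59.94, motion_factor=4)))   # ~15,500 Kbps
```

Plug in your own frame size, frame rate, and motion level to get a rough sense of where the Helo’s green zone will sit for your material.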
On the flip side of that h.264 gripe: if you don’t plan on doing any manipulation of the recorded video, you now have a great-looking recorded h.264 that won’t take forever to upload for VOD later. I’ve posted some examples of how the recorded image looks immediately below (click on the resolutions in the chart in the original post). You can download ‘em and see how they look. The chart covers two sources – 1080p23.98 and 1080p29.97 ProRes 422 HQ masters – recorded by the Helo at both 1080p and 720p at matching frame rates, with separate “color and resolution” and “motion” test clips at recording bitrates of 1,000Kbps, 2,500Kbps, 5,000Kbps, 6,000–10,000Kbps, and 11,000Kbps and up (topping out at 15,000–20,000Kbps). Keep in mind, when you stream to a CDN – that is, after the Helo has encoded the video – the CDN may transcode your media further, so your image may take another hit in quality. Thus, it’s best practice to always test what the end signal looks like before going live. As a side note, and while it’s not aesthetic, the unit will get hot while recording and streaming. There is no fan, so the unit dissipates heat through the chassis…don’t freak out. 5. Who should use the Helo? Portable switching devices like the TriCaster by NewTek are fantastic. But in addition to the switching, audio mixing, and graphics capabilities, they are often handling recording and streaming as well. This is a lot to burden one machine with. I like to spread out the points of failure if possible, and the Helo helps with that. NewTek’s TriCaster Mini. The Helo is also a hardware encoder, rather than the software encoder that portable switchers or software switchers use, and hardware encoders traditionally mean better visual quality – and less potential for hiccups due to a computer system, its OS, and everything else that computer is doing. The small footprint of the unit also makes it portable and lightweight for the aforementioned remote broadcasts. For sporting events and gaming, the larger 1080 frame size and the progressive 60 frames ensure you’re getting as much info per frame as possible without compromising. Lastly, something many users overlook is the value of support. For years, AJA has had world-class support, with knowledge of not just their products but how their products fit into typical workflows. It’s important to have that backup behind you when you jump into new gear. It’s also great for folks who handle events that need to be streamed but may not have uber technical people at the event. 
Churches and schools are a great example. The hard buttons on the front make it totally simple to operate. Have more questions on the Helo or your streaming workflow? Maybe some improvements? Ask me in the Comments section. Also, please subscribe and share this tech goodness with the rest of your techie friends. Until the next episode: learn more, do more. Like early, share often, and don’t forget to subscribe. Learn more about the AJA Helo: https://www.aja.com/products/helo…
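One practical footnote on the lack of CDN presets mentioned in point 1: you end up keeping your own notes on each service’s RTMP ingest settings and re-entering them as Helo streaming profiles. Here’s a minimal sketch of the kind of cheat sheet I mean – the YouTube ingest URL is the commonly published one, the stream key and the “generic” entry are placeholders, and you should always confirm current settings and recommended bitrates in each CDN’s own documentation:

```python
# Hypothetical cheat sheet of RTMP ingest settings to copy into Helo streaming profiles.
# Verify URLs and bitrates against each service's current documentation before going live.
RTMP_TARGETS = {
    "youtube": {
        "server_url": "rtmp://a.rtmp.youtube.com/live2",  # commonly published primary ingest
        "stream_key": "xxxx-xxxx-xxxx-xxxx",              # placeholder; comes from your YouTube dashboard
        "suggested_video_kbps": 6000,                     # rough ballpark for 1080p
    },
    "generic_cdn": {
        "server_url": "rtmp://ingest.example-cdn.com/live",  # made-up example
        "stream_key": "REPLACE_ME",
        "suggested_video_kbps": 4500,
    },
}

def describe(target_name: str) -> str:
    """One-line summary you could read off while filling in the Helo's web UI."""
    t = RTMP_TARGETS[target_name]
    return f"{target_name}: {t['server_url']} / key {t['stream_key']} @ ~{t['suggested_video_kbps']} Kbps"

for name in RTMP_TARGETS:
    print(describe(name))
```

Nothing fancy – the point is simply that without presets, the burden of knowing these numbers is on you.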
1. YouTube needs compressed video Now, if you came to my live show, you may have already seen this. Did you know that you don’t have to upload a compressed version of your project to YouTube? You can actually upload much larger files, like ProRes and DNx, and YouTube will accept them. Yeah! Of course, who would want to actually do this? Well, I do, mainly because I’m vain and… Let’s back up. YouTube does have their “ recommended upload encoding settings ”. And your NLE and transcoder probably have a YouTube preset as well. Both of these typically recommend an h.264 . However, if you shot on a higher-end camera, h.264 is a huge hit in quality from the original file. Plus, no matter what you upload to YouTube, YouTube will recompress it again . Let me repeat: WHATEVER YOU UPLOAD TO YOUTUBE, YOUTUBE WILL RECOMPRESS IT . There is no way around it. Google Datacenter These presets exist because not only do they make your upload time shorter, but they also tax the transcoders over at YouTube less. Think about it: YouTube has over 86,000 hours of video uploaded daily, and all of those files need to be transcoded to the various flavors YouTube can display, so they want to maximize their computing and storage resources. It’s a win-win…you get a quicker upload, and YouTube can turn around the file faster. …Unfortunately, this also means you take a hit in quality, because you’re dumbing your high-resolution file down to h.264. So, if you have the bandwidth, and you have the time – in some cases a 10x increase in upload time – upload your ProRes or other high-res masters. You will see a quality increase, and you can thank me later. 2. Bare drives are safe for storage FALSE! Those caddies for drives were made for technicians to easily mount drives and to clone and dupe several drives at a time. However, I’m seeing them pop up more and more on editors’ desks. Ways to destroy drives This is crazy! Bare drives are not floppy disks. You can destroy the drives in many ways: you can shock them, drop them, get them dirty, and wear out the connector. The first issue – the scariest, most damaging, and easiest to do – is ESD, or electrostatic discharge. And don’t tell me you ground yourself before every touch. Maybe you ground yourself before you remove it from the bag, but then you move about the room, you roll your desk chair, you pick up static. And for the record, just touching metal isn’t good enough; you have to touch grounded metal, so the metal legs of your desk ain’t going to do it. The second is a no-brainer: you can drop it. Now maybe you’re saying, “but I can drop an enclosed drive too!” Yes, but even the most basic of drive enclosures have some form of shock buffer, and if they don’t, then you should not trust your sensitive data on those either. The most overlooked is finger oils. Touching the circuit board of a drive with your bare hands is going to transfer oil to it. This oil can break down the drive’s components, but more likely will become a bonding agent for dust buildup that will insulate components and cause them to eventually heat up and fail. In addition, your finger oils can create electrical connections between components. Lastly, that hard drive connector was never made to withstand many inserts. It was meant to be connected a few times. It’s not USB. The more you use it, the more it wears; the cheaper the drive, the fewer times it can be connected. “But Vince, we always have another backup.” This one is laughable. How long will it take for you to access that backup? 
I’ve been to facilities where that backup bare drive is safely stored offsite and a cut needs to go out tonight. You’ve just shocked your bare drive, and now you have to spend a few hours retrieving the backup. A team of AEs, editors, and producers are on the clock for those few hours of retrieval. Now I know you’re saying, “ bare drives are cheap and convenient, and I don’t have to keep all those power supplies around .” So you’re going to start acting like a technician and stand on an antistatic mat, with an ESD wrist strap, and wear ESD gloves. Plus you’ll keep isopropyl alcohol on hand just in case. But even if you did all that, drives out of enclosures are still more susceptible to vibration, and the drive’s little vent holes are more likely to attract dust and dirt. That’s right: enclosures have vibration dampening and obviously provide a further barrier to dirt. Bottom line, handling a bare drive is akin to Russian roulette. 3. You can’t create ProRes on Windows Apple makes ProRes, so there is no reason for them to make it available on a Windows or Linux platform, right? Kinda. Unless you show them the money. Yes, Apple developed and owns ProRes. And Apple is very protective of this wonderful codec. Apple licenses 3rd parties to use ProRes, and Apple is very selective about who gets this great honor. Normally, it’s hardware implementations of ProRes – that is, done on a dedicated hardware chip as opposed to in software. This could be a camera or a capture card. As Apple itself doesn’t make any hardware encoders to create ProRes, this doesn’t introduce any competition for them. If it IS done in software, then it’s normally in a software solution that would not otherwise compete with a potential Apple software sale, or in a higher-end piece of software that makes buying a Mac to accomplish the ProRes transcode a viable option. Now, there is a market for unauthorized implementations of ProRes on Windows. You’ve probably seen them advertised on forums while Googling a tech question. Many of these companies are now out of business, because Apple lawyers don’t mess around. On top of that, with unauthorized implementations, there is no guarantee the file will have the same visual quality that an authorized implementation may have. It also isn’t guaranteed to play everywhere, or to have a long shelf life if the creating software disappears. If you’re in a post-production environment that needs to generate media that goes through QC, you may fail QC by using one of these unauthorized apps. Apple maintains a list of Apple ProRes Authorized Products. Sadly, they don’t break it down by OS, so it’s not always clear what platform Apple has authorized. So, yes, you can create ProRes on Windows, but it’s probably gonna be expensive or unauthorized. So choose wisely. 4. Editors only need to know how to edit Editors today need to know how to edit in every popular NLE on both Windows and OSX. They need to have at least a basic understanding of how the systems work and how data gets in and out of them. In addition, editors need to have a basic understanding of sound cleanup and color correction. That’s the minimum. Editors should also have a thorough knowledge of Photoshop and After Effects. What? Vince, you’re crazy! I do fine without any of that. Maybe you do. But when a producer needs a problem fixed and the editor in the bay next to you can come to the rescue, who do you think they are going to hire back first next season? You need to be a swiss army knife. 
Yes, it’s great for cutting, but it can do all this other stuff in a pinch. Today many producers have smaller budgets, and they are looking for one-stop shops. All this knowledge allows you to be that shop for smaller indie projects and development presentations. And when those producers grow and move on to bigger, better content, they are going to bring their savior – their swiss army knife – with them. But wait, if I’m going to do all that, I need more money. Maybe, but with computers making edit decisions and post being farmed out to other countries, the only way to stay competitive is to offer everything they need in one stop. Now you’re probably saying, that’s a lot to learn – where do I start? I’d start by exploring all that your current NLE can do. Become an expert with the color correction tools, both curves and hue wheels. Jump into the sound capabilities and examine the EQ. Break back into that effects drawer – notice the star wipe is gone. It blew my mind, and I had to create one in After Effects. After you’ve become an expert at your NLE of choice, become an expert in a competing NLE. Ask your AEs about their workflow, and teach them yours. Keep moving from program to program, and before you know it, you’ll be creating star wipes in seconds. 5. Video needs to be action and title safe OK, everyone knows what title safe and action safe are, right? Refresher: some NLEs call them “guides”. These guides are a holdover from the days before flat-panel televisions, when CRT tubes were what every household had. CRT tubes curve, which caused portions of the video on the screen to be distorted or not even visible. Action safe – the largest guide – was meant as a guide for TV and filmmakers to know when action might not be visible on end users’ TVs. Don’t go past that guide for important action, and you’re OK. Title safe and action safe (click to enlarge) Title safe, as the name implies, did the same for titles. It’s a smaller area because, due to the curvature of the edges of the tube, text can appear distorted. So, the rule of thumb was to keep the titles in this area. But, you may ask, “Michael, how is this even relevant now that everyone has flat panels?” And you’d be right. And then you might ask, “But if I don’t need to use the guides, can I go right to the edge of the screen?” And I’d give you a mean side-eye. Even flat panels have what’s called overscan – areas of the video image that the end user doesn’t see – and how large this area is can, unfortunately, differ from manufacturer to manufacturer…and country to country. You obviously don’t want anything important in this area for fear it may be lost. And in terms of aesthetics, having anything important near here is usually discouraged. So, nowadays (and I’m looking at you, YouTube folks) , keep everything within action safe. All modern flat panels can display it, and you won’t run off of the screen. As for personal preference, I like keeping text within title safe, as I find it easier to read than having the text extend to the edge of the screen (there’s a quick safe-area math sketch after this episode’s credits). Until the next episode: learn more, do more. Like early, share often, and don’t forget to subscribe. Thanks for watching. Special Thanks: Vince Rocca Online: http://vincerocca.com/ YouTube: VinceRocca…
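A quick footnote to point 5 above: if your NLE doesn’t draw guides for you, the traditional safe areas are trivial to compute – action safe is conventionally 90% of the frame and title safe 80%. Newer broadcast recommendations shave those margins down, so treat these as the classic CRT-era values rather than a current delivery spec. A minimal sketch:

```python
def safe_area(width, height, fraction):
    """Return (x, y, w, h) of a centered safe-area rectangle covering `fraction` of the frame."""
    w = round(width * fraction)
    h = round(height * fraction)
    x = (width - w) // 2
    y = (height - h) // 2
    return x, y, w, h

# Classic CRT-era conventions: action safe = 90%, title safe = 80% of the frame.
print("action safe:", safe_area(1920, 1080, 0.90))  # (96, 54, 1728, 972)
print("title safe: ", safe_area(1920, 1080, 0.80))  # (192, 108, 1536, 864)
```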
You can follow along with these two workflow documents: the 5 THINGS Post Production Workflow PDF and the 5 THINGS Distribution Workflow PDF. 1. Pre-Production 5 THINGS sprang from my love of technology, but also from the realization that the proliferation of creative tools has clouded the understanding of much of the underlying reasons for that technology. I also knew that this was a niche concept in a niche market, so I’d have to go it alone – which means much of the workflow behind 5 THINGS comes from me. First, it starts with coming up with an idea: something that not only interests me but is also a hot topic in the industry. I also consult with Moviola , who has sponsored the series this season and also premieres each episode as a webinar once a month. Moviola wants to ensure that the episode topic is relevant to their viewers, so that gives me some framework to ensure the episode is applicable. The one-webinar-a-month paradigm also gives me a timeframe to plan production and post-production by. It’s then brainstorming time: writing out the 5 major questions of the video, along with ideas for b-roll for each point. I have a Word doc template that I then begin to populate with the ideas in a narrative and conversational fashion. I then simply start dragging my knuckles across the keyboard and grunt approvingly as the episode starts to take shape. This is what will become my script and teleprompter feed. Because much of what I talk about relies on exact speeds and feeds, it’s important that my tech info be as correct as possible, so I traditionally don’t ad-lib the episode as much as I’d like. It’s also at this point that I’ll call on some industry friends – other creatives and technologists, and sometimes manufacturers – to discuss some of my viewpoints and ensure the info I’m pontificating on is accurate. Lastly, it’s time for the cutaways. Being raised on media, quite often my normal thought process recalls clips from movies and TV shows, and many of the cutaways are simply a natural extension of my cognitive process. I can’t help it; it’s how I’m wired. Put all of this in a blender, and I get six to ten pages of tech goodness. Now, it’s time to shoot. 2. Production I shoot 5 THINGS at my home office: a modest 13×13-foot room that allows me to do 5 THINGS on nights and weekends. However, a room of this size does present some challenges. 13’×13’ room layout; camera and light at bottom left. First, let’s start with the look. Given the small size of the room, having multiple cameras set up would be tricky. So, I decided to shoot with a 4K camera and release in HD. This allows me to have a medium shot and a close-up from one camera. This also means my production lighting requirements were reduced, as I didn’t need to take into account lighting for other angles. I wanted a shallow depth of field for a little bit of bokeh ; a traditional video lens would have most everything in focus, and the frame would be very busy. The Sigma 18-35mm f/1.8 was a great choice. It’s also a fast lens, so I don’t need a ton of hot lights in a small production space. So, I needed a camera that could shoot 4K, had removable lenses, and wouldn’t break the bank. I chose a Blackmagic Design Production Camera 4K . Next was the teleprompter. Given the small size of the camera, and the less-than-constant use, I chose a modest prompter – the Flex Line from Prompter People . Easy to set up, and lightweight, so I didn’t need a heavy-duty tripod. It’s also adjustable and has a large enough monitor that I can plug my laptop into it and run the prompter. 
Prompter People also has the Flip-Q software to run the teleprompter . Prompter People also has a sister lighting company called Flo-light . The lights are very inexpensive, but not the most rugged. A 220AW , with some diffusion, situated directly behind the camera, lights my face just fine. It’s not as artistic as I’d like, but the room constraints somewhat limit my options. Speaking of lights, I also have a 1,500-lumen LED light behind me – a simple clip-on light on an old boom stand. This not only gives me a hair light, but the white ceiling becomes a great faux bounce card to illuminate the room. Blackmagic Cinema Camera recording settings. Now, because I am a one-man production band, I need to see the shot while I’m in front of the camera. I run the camera’s HD-SDI out through an HDMI converter and into a 27” computer monitor. This allows me to check framing, lighting, and that the camera is actually recording. As for sound, I use a Sennheiser wireless lavalier mic transmitter and receiver and run the signal directly into the camera. As I do cut sound for indie projects on occasion, I do have some dampening up around the room. Sadly, it’s not enough, and if you listen to the audio version of this episode, or with headphones, you can still hear the slight reverb in the room. Lastly, I gotta put on my face. That includes makeup. I then shoot, and the footage records directly to an SSD in camera. I record in ProRes 422 HQ at UHD resolution at 23.98. I like the film look 23.98 gives, coupled with the short depth of field from the lens. Plus, most of the episode is a talking head, so there isn’t a need for a higher frame rate. As a fringe benefit, the media files take up less space and are easier to stream for viewers. As for dynamic range, I shoot in a video look as opposed to film, to reduce the amount of time spent on color in post. 3. Post Production Ah yes, the warm bosom of post production. It’s home, ya know? During this segment, feel free to follow along with this handy-dandy post production workflow flowchart. Ingest and offline workflow for 5 THINGS. As soon as I’m done shooting, I take the raw camera SSDs and mirror the footage to a second SSD for backup. Once mirrored, I take a Premiere Pro template I’ve created for the 5 THINGS series and ingest the UHD ProRes 422 HQ footage into the project. Given the newer proxy workflow in Premiere Pro , I no longer need to manually create proxies and relink during the online – Premiere does it for me. I flip the files to ProRes LT at UHD, maintaining the 23.98 frame rate. I’m often asked why I don’t offline in a smaller codec or at a smaller frame size. The answer is pretty simple: I don’t need to. The amount of footage is small enough that I don’t need to save a ton of space, and I like the UHD resolution so I know how much I can safely punch in for my HD deliverable. It also gives me the media format I’ll ultimately archive to. What?! Yeah, I wrestled with this decision for a while…but let’s be honest here, this is a web series. There is little reason I need to keep uber-high-res masters of the raw footage after the episode is complete. My final output will be from the camera masters, and the need to go back to the originals is slim to none. Plus, archive storage constraints are always a concern (see the quick storage math below). I would not recommend this for most folks, especially broadcast professionals, but for me, I was willing to live with the quality tradeoff for archival material. 
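To put that archival tradeoff in rough numbers, here’s a back-of-the-napkin storage calculator. The bitrates are approximations I’m assuming for UHD 23.98 material (Apple’s ProRes white paper has the exact target rates), so treat the output as ballpark, not budget:

```python
def gb_per_hour(bitrate_mbps):
    """Convert a video bitrate in megabits per second into gigabytes per hour of footage."""
    return bitrate_mbps / 8 * 3600 / 1000  # Mb/s -> MB/s -> MB per hour -> GB per hour

# Assumed, approximate target rates for UHD 23.98 (verify against Apple's ProRes white paper):
prores_hq_mbps = 700   # ProRes 422 HQ, roughly
prores_lt_mbps = 325   # ProRes 422 LT, roughly

print(f"ProRes 422 HQ: ~{gb_per_hour(prores_hq_mbps):.0f} GB per hour")
print(f"ProRes 422 LT: ~{gb_per_hour(prores_lt_mbps):.0f} GB per hour")
```

Multiply by however many takes get hoarded per episode, and the appeal of archiving the LT files instead of the HQ masters becomes pretty obvious.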
By now, the proxies are created, and I transfer them to a several-year-old (and soon to be retired) Synology 1512+ 20TB NAS . This offers me a backup of the media and redundancy in the event a drive fails. I also keep the raw media on the SSDs until the project is done. I also put the proxies on a portable bus-powered SSD – a 500GB Glyph ; this allows me to edit anywhere, easily. My main edit system, until recently, was a 2013 MacBook Pro with an nVidia GeForce 750M GPU and 16GB of RAM. Starting with this episode, I’m using a new 2017 MacBook Pro with a CalDigit USB-C dock. I run the HDMI out to a consumer 42” UHD monitor, which I’ve calibrated with a Spyder Pro. I can now begin to select my takes and assemble my cut. I mark my good takes in Premiere and do a string-out. Supplemental media workflow for 5 THINGS. Once my talking heads are strung out, I go into After Effects and generate the questions before each segment, as well as the lower-third factoids. I attempted to work with the Live Text option from After Effects to Premiere , but too often I found I needed to adjust parameters of the text that Live Text didn’t give me options for. So, I do the questions and lower thirds in After Effects, export them with an alpha channel, and then use them in my Premiere sequence. I also use Light Leaks from Rampant Design as transitions between segments. Next, I track down the film and TV cutaways I need and re-wrap them in a QuickTime wrapper or flip them to a more edit-friendly format. This is also the stage where I’ll incorporate footage or effects from other NLEs. I’ll also use Telestream’s ScreenFlow for screen recordings. ScreenFlow allows me to zoom in to screen recordings without much loss in quality, so I can point out features or details in whatever I’ve recorded. I’ll also use Boris FX Sapphire plugins for the dirty video effects . Sapphire plugins are very processor-intensive, so having all of the media in a more edit-friendly format makes previewing the effects that much easier. Of course, during this time, I’m backing all of my media up to its respective data parking lots. Once the rough cut is done, I render the sequence and export a ProRes file using the rendered previews. Why? Using previews allows me to go back into the sequence and make changes and fix mistakes without having to re-render the entire sequence during export. Color and graphics workflow. After the render, I’ll export the sequence in Adobe Media Encoder, same as source, and upload it to YouTube as unlisted. Why? Rendering to an h.264 takes up a lot of CPU resources, and I want to continue working. Same as Source is a quick export; h.264 is not. YouTube will take ProRes files no problem – the upload will just take longer, and it happens in the background. I then share this unlisted link with my friends at Moviola for input, as well as with Thomas, the motion graphics dynamo at Moviola, so he knows the timing for his motion graphics. Thomas will do 3-5 motion graphics per episode if needed, to add a bit more eye candy than just my overactive hand movements. iZotope tools used in 5 THINGS. Color is not one of my strong suits . Because of this, I’ll often enlist the help of Jason Bowdach at Cinetic Studios to do a corrective grade on my talking heads. I shoot him over a few DPXs of my talking head, and he rescues me from the daredevil red hue that comes from this room. I use several iZotope tools for any audio cleanup I may need, and to EQ and compress my audio for ease of listening. 
It’s now time for the conform, and the proxy workflow in Premiere makes this very easy. I simply toggle from low res to high res, and just as before, I’m ready to render. I take the rendered sequence, export the cut from Adobe Media Encoder using previews, and I now have my master file. 4. Distribution This is where things get really hairy, where I spend a huge chunk of time, and what I would kill to automate. During this segment, feel free to follow along with this handy-dandy distribution workflow flowchart. I currently have 5 distribution points for 5 THINGS : Moviola, YouTube, my website, Roku, and iTunes. Distribution via Moviola workflow. Moviola is relatively simple. I generate an h.264 using a slightly tweaked Adobe Media Encoder YouTube 1080p preset. Moviola has a proprietary VOD platform, so the best I can do is give them a high-quality file delivered via the web that they can work with. Moviola handles all of the metadata for their system. YouTube , as I mentioned earlier, gets a ProRes file. While this lengthy upload happens, I start to work on the thumbnail artwork, which will become the basis for every distribution platform. I’ll also begin to enter metadata on YouTube and do some research on what kinds of keywords are working for similar videos. I’m a huge proponent of education and of being accessible to as many people as possible. To this end, I ensure 5 THINGS gets captioned. It also reportedly helps with S.E.O., although to what extent I don’t know. I have a workflow hack that gets YouTube to take your script and time it to your video for you, free. This becomes your captions. After YouTube times the captions, I download the SRT caption file, as I’ll be using it for all subsequent encodes. 5 THINGS website workflow. I then take the transcript of the episode and use it as the basis for a blog post at staging.5thingsseries.com. This allows a visitor to read the information in the event they don’t want to watch or listen, as well as, again, being good for S.E.O. I take screen grabs from the video to punch up my potentially dry transcript. As the video is already on YouTube, I simply embed the video at the top of the blog post so viewers can watch on my site or on YouTube. Roku workflow for 5 THINGS. Now, Roku is a difficult beast. As Roku is finicky about channels linking to YouTube videos, I need to go through the lengthy process of creating streamable media for my Roku Channel . I chose to go with Apple HLS as the streaming media format for Roku, as it’s the most flexible and forgiving across various platforms. This generates segments of each episode in varying tiers of quality, so the end user’s player can automagically switch between resolutions depending on their available bandwidth. Generating this media format – plus the m3u8 manifest file (there’s a small sketch of what that manifest looks like at the end of this post) – is something not a ton of encoders do easily. I settled on Sorenson Squeeze , because it creates the media and generates the m3u8 playlist file. However, there is a slight gamma shift, which means I need to make some small tweaks to the color on encode. BIF files for Roku. This is obviously not ideal. Tests with earlier versions of Compressor 4 yielded several errors, so when time permits I intend to experiment with newer versions of Compressor to streamline the process and potentially move from Sorenson to Compressor. While the encode is happening, I take the master ProRes file and run it through a little program on an aging PC called BIF Video File Creator . 
A BIF file, or Base Index Frame file, is a file that Roku uses to show thumbnails of frames as you rewind and fast-forward through the episode. It’s a little eye candy that makes scanning through an episode easier for the viewer. For graphics, I take the thumbnail that I created for YouTube and resize it to fit the graphics requirements for my standard definition and high definition Roku channels. 5 THINGS Roku Channel My Roku channel pulls all of its data from external resources via XML. This makes it easier to update, and easier to use a 3rd party media host like Amazon S3. Thus, I need to hard-code the episode metadata into the XMLs, and then link to the media, caption files, and thumbnails for the episode. I then upload all of these files to my Amazon S3 bucket. It’s then Q.A. time, and I load the channel up on both my Roku 3 and Roku 1 to ensure HD and SD look and play in an acceptable manner, and that the thumbnails, captioning, and BIF files perform as expected. iTunes workflow for 5 THINGS. Next, it’s iTunes, which is a new addition for 5 THINGS; in fact, it may still be under development when this episode premieres. I have 5 THINGS as 2 podcasts – one is the video as you’re watching now , and one is just the audio portion of the episode , as I’ve had requests from folks who want to listen on car rides or during other activities where watching isn’t possible. I use Compressor to generate the 720p h.264 files for the video podcast, as well as the MP3 version for the audio podcast. I use a great app called Subler to insert the SRT caption file into the 720p podcast file, as well as to insert the plethora of metadata into the file. Blubrry has a fantastic WordPress plugin that maps metadata from your blog post directly to iTunes. This allows me to upload my iTunes media to Blubrry and have one place for the metadata to pull from for both podcasts…so at least that portion is somewhat automated. 5. Improvements There is always room for improvement…and so there are several things I’d change. One is more dynamic lighting, rather than just a boring, somewhat evenly lit, and occasionally blown-out face. I’d also love even more motion graphics to illustrate the tech talk I’m making on camera – less of me, more of the things I’m talking about, without a basic slideshow or a Ken Burns effect over a still image. Automation for distribution is a biggie. The various platforms all require specific media formats and metadata. Most online distribution platforms that connect to these VOD outlets are really not meant for the indie person such as myself, so a 3rd party service really isn’t in the cards for now. Lastly, I wanna work with you. Wanna be a part of 5 THINGS? Hit me up. Episode suggestions, comments, or offers of collaboration are always welcome. Have more workflow questions on the web series? Maybe some improvements? Ask me in the Comments section. Also, please subscribe and share this tech goodness with the rest of your techie friends. Until the next episode: learn more, do more. Like early, share often, and don’t forget to subscribe. Thanks for watching.…
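A footnote on the Roku/HLS step above: the m3u8 master manifest the encoder spits out is just a small text file pointing at each quality tier. Here’s a minimal sketch of a script that writes one – the rendition names, bandwidth numbers, and file layout are made up for illustration, not what Squeeze actually outputs:

```python
# Hypothetical renditions; bandwidths are in bits per second and purely illustrative.
RENDITIONS = [
    {"name": "ep_1080p.m3u8", "bandwidth": 6_000_000, "resolution": "1920x1080"},
    {"name": "ep_720p.m3u8",  "bandwidth": 3_000_000, "resolution": "1280x720"},
    {"name": "ep_480p.m3u8",  "bandwidth": 1_200_000, "resolution": "854x480"},
]

def build_master_playlist(renditions):
    """Build a minimal HLS master playlist that lets the player switch tiers by bandwidth."""
    lines = ["#EXTM3U"]
    for r in renditions:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={r['bandwidth']},RESOLUTION={r['resolution']}")
        lines.append(r["name"])
    return "\n".join(lines) + "\n"

with open("master.m3u8", "w") as f:
    f.write(build_master_playlist(RENDITIONS))
```

The per-tier variant playlists and the actual segments are what the encoder generates alongside this; the master manifest just tells the player what’s available.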