We are taking an important leap toward a public release of our latest product, RADiCAL Live. We have now released the RADiCAL Live Connector for NVIDIA Omniverse to enable real-time, multiplayer 3D motion capture inside Omniverse, for everyone, everywhere, from any device.

Omniverse is based on Pixar’s Universal Scene Description and NVIDIA RTX technology. The platform enables universal interoperability across different applications and 3D ecosystem vendors, and provides real-time scene updates. It is designed to act as a hub, enabling new capabilities to be exposed as microservices to any connected clients and applications.

 

About RADiCAL Live:

The RADiCAL Live cloud platform powers software-only, massively scalable, high-quality remote 3D skeletal reconstruction, character animation and user virtualization for practically unlimited participants in shared virtual spaces.

Features include:

  • Real-time 3D motion capture from 2D video: RADiCAL’s proprietary AI generates high-quality 3D skeletal motion data from a single, real-time 2D video feed.
  • Multiplayer: Our proprietary cloud-based multiplayer solution enables shared virtual spaces in Omniverse. Every participant can see themselves and every other participating actor in the shared 3D space, in real time.
  • No special hardware / software: RADiCAL Live runs on any internet-connected consumer device equipped with a 2D camera – no trackers, suits or dedicated hardware required. Live works across the entire consumer hardware and software landscape (desktop, laptop, tablet or mobile device). You will preserve nearly 100% of device-side compute capacity, making it available to support graphics rendering and application logic.
  • Setup and preparation: RADiCAL Live requires no special setup, camera calibration or constrained environments. Just ensure decent lighting, full-body visibility, a one-second calibration and that you’re alone in the frame. That’s it.

 

Access Live – developer account:

We intend to release RADiCAL Live to our community soon – entirely self-service, easy to use, and massively scalable. Until then, the Live platform is available through a developer account that we grant to customers and partners upon request.

If you don’t have a RADiCAL developer account, get in touch. We’re happy to offer one to as many users as we can, as fast as possible, but we need to coordinate cloud resources to ensure a smooth and seamless experience for everyone. We’re therefore sequencing the rollout according to use case, expected engagement, and a few other metrics.

 

Access Live in Omniverse – without a developer account:

Until you have a developer account, you can still try out what it looks and feels like using a simulated live data stream. Check out the FAQs (including a technical guide) here.

 

 

Yesterday, 30 August 2021, we released our latest AI, version 3.2.10. This AI update makes significant progress in understanding and reconstructing movement through depth and the actor’s relationship with the floor:

 

1.  Floor contact (footlock): Our AI now has a better, explicit, understanding of the relationship between the actor and the floor in the scene.  This produces more consistent, stable results with respect to the feet making contact with the floor.

 

2.  Spatial trajectory: Our AI is now better able to detect and reconstruct movement through space, specifically depth. As a result, you’ll see fewer unnatural global oscillations in spatial trajectory in our results.  And where they still exist, they are consistently less pronounced.

 

These changes, in turn, also produce subtle but noticeable improvements to fidelity (detail), smoothness and stability (i.e., even less jitter, chop and jerk).

 

Update your results:

As ever, results you have previously produced using an older version of the AI can be updated with a single click from the right-hand sidebar in your scene.

Much more to do:

Our aim is to hold our footlock / floor contact and spatial trajectory metrics to the highest standards of the industry.  We believe we can get there, and we know there’s work yet to be done.  There will be many more of these releases, and corresponding improvements, going forward.

 

 

We’re excited to announce an important update to our AI: Gen3.2. Gen3.2 comes with improvements in these areas:

    • Fidelity: You’ll see more detail across a wider domain of motion and video categories.
    • Smoothness / stability: Even more organic stability with even less jitter, choppiness and jerkiness, even when faced with more challenging videos.
    • Input tolerance: We now understand a wider range of camera angles and aspect ratios across your uploads.
    • Fixed the hunch: We fixed a common issue that produced a “hunching” effect in certain videos.

 

Paving the way for more:

This update represents an important leap in that we’ve been able to peel away legacy constraints.

Beyond making the results we produce right now more nuanced and stable, we’ve also opened the door to a number of improvements in the pipeline for release to our entire community soon, including a wider motion domain (more motion categories), higher fidelity, improved footlock, greater stability, and real-time performance for everyone.

 

Best practice -> best results:

Because our AI is so much more resilient, providing greater input tolerance and stability, you can use a much wider variety of videos. That said, for the best possible results, we recommend you continue to observe these principles:

    1. Single actor: details
    2. Full body visibility (don’t leave the frame): details
    3. Good calibration: details

 

Fast updates: 

Note that results you have previously produced using an older version of the AI can be updated with a single click from the right-hand sidebar in your scene.

FBX results: Blender add-on / Unreal asset pack:

Gen3.2 comes with small changes to our standard skeleton. For most users, these changes are minor and will be either invisible or easy to accommodate when applying our FBX-formatted results in their own software environments.

However, if you’re relying on our Blender add-on or Unreal asset pack to use our FBX-formatted results, please note that we’ve updated both. The new Blender and Unreal integrations are available for free through our downloads page (along with the previous versions, for backward compatibility).

 

We’re celebrating:

We’re celebrating Gen3’s anniversary and the release of Gen3.2 with an upgrade code for all annual plans. See more here.

 

Our friend Jacob Ssendagire (Instagram) has developed beautiful content using RADiCAL and Cinema4D. He’s now prepared this great tutorial, which makes it easy to apply our FBX-formatted results to characters in C4D.

As a reminder, you can use the free T-pose rig to help you map to your character before re-targeting your RADiCAL animation to that character. You can access the T-pose file for free through our downloads page.

Thanks Jacob!

13 May 2021

We’ve updated our AI to version 3.1.11 in RADiCAL Core.  Changes include:  

    • Improved fidelity (detail) and stability. 
    • Expanded input tolerance by allowing for a wider range of camera angles and aspect ratios. 
    • Fixed a common issue that produced a “hunching” effect in certain videos. 

We recommend you continue to observe best practice, despite greater input tolerance and stability: 

    1. Single actor: details
    2. Full body visibility (don’t leave the frame): details
    3. Good calibration: details
    4. Aspect ratio: 4:3 (landscape): details

 

Fast updates: 

Note that results you have previously produced using an older version of the AI can be updated with a single click from the right-hand sidebar in your scene.

Studio:

The AI update will be rolled out to RADiCAL Studio in due course (check announcements).

BlenderDaily has posted a quickfire tutorial with additional tips for Blender users on their Instagram channel. Great work by BlenderDaily. More tips about using RADiCAL in Blender can be found in our FAQs on Using RADiCAL’s MoCap Data.

 

We’re grateful and excited that we have received an Epic MegaGrant.

We already provide a real-time deployment for UE4 through RADiCAL Studio. With the support of this grant, we’re working on democratizing our product further.

We’ll soon release RADiCAL Live to our entire community of about 80,000 content creators, meaning it will no longer be limited to enterprise customers.  RADiCAL Live is a cloud deployment of our AI, accessible to everyone, everywhere, for real-time remote 3D animation and virtualization, right from your home device (regardless of what it is) to enable virtual production, game development, game play, digital art and motion capture.

 

 

For users working with Unreal Engine, RADiCAL Live will come integrated with UE4 (soon, UE5) LiveLink. This means you’ll be able to ingest live animation data coming from RADiCAL’s cloud servers directly into your local machine running Unreal, both inside the editor and in packaged apps.
*    *    *

RADiCAL Live animation data will also be available through our website (in WebGL) and, soon, Unity.  We’re also looking into real-time integrations for Blender and iClone, as we’re fans of both, but we’re still evaluating the requirements and timeline for those.  If you’re interested in playing a role in any of these integrations, as a developer, tester or to give advice of any nature, please do get in touch.

Toward the metaverse.

Thank you, Unreal and Epic Games.  It means the world.

– Team RADiCAL

For many of our users, it is important that their results come with stable and realistic contact between the character’s feet and the floor. We’ve therefore been working hard on a solution we call “footlock.” 

 

We’re still working on it. To be precise, we’re right in the middle of it. 

 

But we’re now able to release an experimental version, in public beta, in the form of a post-processing footlock layer in RADiCAL Core. The post-processing layer produces decent, and sometimes great, results across a number of use cases. However, because the footlock layer may not work for all users across all use cases, we’ve decided to provide a choice: you can see your results both with and without the footlock solver enabled.

 

This footlock solver is available automatically for all scenes created in RADiCAL Core after December 8, 2020. It remains experimental and we’re continually improving it. To understand how best to use it, and where it might do well or fail, please consult our Learn section.

 

Switching to footlock: open a new tab within your scene

 

For scenes created after December 8, 2020, you can choose to view your results in a separate browser tab, but within the same scene, with the “footlock” solver enabled.  To do this, click the “Switch” button within the “footlock” section on the right sidebar of your scene.

 

Your viewing mode is reflected in the “current view” status. To see the solver results, make sure that: 

  • Footlock: Available
  • Current view: On

 

 

FBX downloads: current view mode + file naming convention

 

The FBX generate + download buttons will give you the FBX file that corresponds to the “current view” mode.  

 

Once downloaded, you will know which version you’re looking at by checking for a suffix in the file name that looks like this: _fl. For example:

  • Conventional FBX: gen-3-1-samples_scan-005
  • FBX with footlock: gen-3-1-samples_scan-005_fl
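
If you’re handling exports in a script, a minimal sketch of sorting files by that suffix might look like this (the download folder below is a hypothetical placeholder):

```python
from pathlib import Path

# Hypothetical download folder – adjust to wherever your FBX exports live.
downloads = Path("~/Downloads").expanduser()

for fbx in sorted(downloads.glob("*.fbx")):
    # Exports produced with the footlock solver enabled carry the "_fl" suffix.
    variant = "footlock" if fbx.stem.endswith("_fl") else "conventional"
    print(f"{fbx.name}: {variant}")
```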

 

Available in Core – other products to come

 

The footlock solver is currently available as an optional view as part of the Core product (and certain Core API users). It, or a subsequent update, will also be rolled out to Studio – keep an eye out for announcements.

 


We’re excited to announce the launch of our Core API.  Now, developers and enterprise partners can create applications around RADiCAL’s cloud-based motion capture.

 

The API allows our partners to track motion and animate characters in custom user experiences, programmatically, at runtime. 

 

We maintain a private API for you to upload videos and download animation data. You can use your own cloud resources or run your pipeline through RADiCAL’s end-to-end cloud infrastructure – i.e., we will seamlessly process your videos and deliver results to your users.

 

Beyond the API and cloud resources, we also provide technical support and code to connect to the API and programmatically apply the animation data to characters inside end users’ 3D clients.
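
As a rough illustration of the upload-process-download flow described above, here is a minimal sketch from a partner’s side. Every endpoint path, field name and the token below is a hypothetical placeholder, not the actual Core API surface – the real interface is documented when you receive access:

```python
import time
import requests

API_BASE = "https://api.example.com/v1"   # hypothetical base URL
TOKEN = "your-api-token"                  # issued with your developer account
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Upload a video for processing (hypothetical endpoint).
with open("performance.mp4", "rb") as f:
    scene = requests.post(f"{API_BASE}/scenes",
                          headers=HEADERS,
                          files={"video": f}).json()

# 2. Poll until the cloud pipeline finishes processing.
while True:
    status = requests.get(f"{API_BASE}/scenes/{scene['id']}",
                          headers=HEADERS).json()
    if status["state"] == "done":
        break
    time.sleep(10)

# 3. Download the animation data for use in your own 3D client.
anim = requests.get(f"{API_BASE}/scenes/{scene['id']}/animation",
                    headers=HEADERS)
with open("animation.fbx", "wb") as out:
    out.write(anim.content)
```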

 

If you’re interested in learning more, just get in touch through our dedicated channel for developers and enterprise partners: https://getrad.co/contact.

 

*      *      * 

 

– Team RADiCAL

 

After carefully considering feedback from our community and the industry at large,  we are excited to offer the Studio Creator package.

 

  • Use for free, no limits: anybody with a RADiCAL account can now install and use Studio to animate and visualize as much motion as they want, free of charge, no credit card required.

 

  • Download FBX exports – pay only for what you need: We’re also launching our pay-as-you-go system (PAYG) as part of the Studio Creator package. If you decide that you want to export animation data from Studio, you can purchase one-minute increments of export time that you can use whenever you want. You only pay for the results you need. Left-over credits won’t expire for a year.

 

  • Download directly from website (we’re leaving Steam): Studio is now available directly through our website here and here. If you previously downloaded Studio from Steam, no worries, you can simply delete the Steam version and replace it with the version downloaded from our website.  Your results will still be in the folder where you saved them.

 

  • Annual Producer – lower price – unlimited FBX exports: We’re dramatically reducing the price of our Studio Producer package to $250, featuring unlimited animation data exports. Check out our updated pricing page here.

 

We are doing this in the hopes that all creators, regardless of resources, will be able to use RADiCAL Studio in their content pipelines.

 

No subscription.  Free to download.  Free to use.

 

P.S.: Many of you also asked for a Studio tutorial, which you can view here.

 

*                       *                        *

 

– Team RADiCAL

14 Oct 2020

Yesterday, on October 13, 2020, we released an important update to our AI: version 3.1.7.

 

What you should know:

  • Impact on AI results – reduces oscillations: In terms of visible results, the improvements are subtle, but critical. Specifically, v3.1.7 significantly reduces certain oscillations on the Y axis (the vertical axis). In previous versions, these Y-axis oscillations were mostly correlated with motion that involved the actor raising the arms over the head, often from a simple standing position. In all of our tests, these specific oscillations are now significantly reduced, if not entirely gone.
  • Footlock – preparing for an important release: The v3.1.7 update should also be seen in the context of our wider efforts to enhance our AI for a more solid planting of the skeletal animation data in relation to the floor. Version 3.1.7 lays part of the foundation for improved footlock. With v3.1.7 now in production, we hope to release a first major update improving footlock, coupled with improved positioning of the root motion in world space, in 2020.
  • Calibration – still important: Even after this update, you should minimize oscillations – and secure fidelity, plausibility and aesthetics – by executing a solid calibration. Here is a reminder of the specific advice we provided on calibration cycles a short while ago.

 

Here’s how it’s being rolled out:

  • Core: Our cloud AI, available through RADiCAL Core, has already been updated to v3.1.7.
  • Studio: Our RADiCAL Studio users should expect to see the (free) upgrade this week. Steam should update your app automatically (please check your Steam update settings).

 

Special thanks to the AI team, who have pulled another major achievement out of the hat.  

 

*     *     *

 

As always, thanks for being a part of our community, and don’t hesitate to reach out – we are always looking for constructive feedback.

 

– Team RADiCAL

 

We’ve released plugins to make the re-targeting process faster and easier for Blender and Unreal users.

After months of innovation and tireless work by our team, the RADiCAL Studio is finally here.  It’s available through Steam here (you’ll need to sign up for a Studio product to log in). Before we get into the details, here are some quick pointers to materials we’re covering elsewhere:

 

  • Early bird pricing: We’re offering early bird pricing (>50% off) for a short time: details here.
  • Free trial: Studio comes with a free trial (no credit card required): details here.
  • Known issues: This release comes with a few known constraints and issues: details here.

 

Studio brings our AI to your machine: 

 

With Studio, you are untethered from the cloud and have access to unlimited motion capture in your own home, studio or event space. At heart, the RADiCAL Studio brings our AI to your local machine. We call it step-by-step (SBS) processing.  SBS processing mimics the way we sequentially process your videos through the cloud: record video first, run the AI later.  

 

The big difference? Since this is your own workstation, we don’t have to meter your usage.  Your usage is only limited by your own time.  🙂  

 

Real-time results (beta):

 

With Studio, we are also revealing our real-time functionality. The real-time feature is the product of multi-disciplinary efforts across not just deep learning, but also GPU optimization and a lot of great software engineering. It remains in beta because we still need to test and stabilize the AI’s output across a wider range of hardware configurations.

Our final objective, even with real-time processing, is to achieve the fidelity, smoothness and range of motion Gen3 is capable of through the cloud.

At this time, we recommend running the real-time feature on Windows machines with the strongest NVIDIA graphics cards (at least a 1080, 1080 Ti, 2060, 2070, 2080, or 2080 Ti), although we’ve seen it do reasonably well on even smaller cards (1060 and 1070).

 

Live stream into Unreal Engine (Live Link):

 

With the right subscription, you can also use our real-time feature to stream your motion directly into a scene in Unreal Engine 4 (Unity, iClone, and Blender coming soon). For more about using UE4 LiveLink, go here (Change Log) and here (FAQs). 

Tip: if you’re a student or an indie, you may qualify for discounted pricing on LiveLink access – please get in touch.

 

Exporting your animation data:

 

Whether you use step-by-step (SBS) or real-time processing, you can export your animation data in FBX format through our website here. You can read more about how that works here (FAQs).

Tip: if you’re using the UE4 Live Link for real-time streaming, your animation data can also be saved directly in UE4.

 

Gen3.1 – new animation rig:

 

Studio also comes with Gen3.1, which features a new and improved animation rig. The 3.1 skeleton more closely conforms to industry standards, and it’s much easier to use across modern and legacy workflows in Unity, Unreal, Blender, iClone and others. See the details in this change log post.

 

 

*     *     *

 

As always, thanks for being a part of our community, and don’t hesitate to reach out – we are always looking for constructive criticism and feedback.

– Team RADiCAL

Yesterday, September 4, 2020, we released the latest update to our AI: Gen3.1. With the release of Gen3, we signaled that all areas of our product were going to improve, including our FBX output. As the versioning suggests, while this is an upgrade to 3.0, version 3.1 doesn’t imply fundamental, visible changes in our output. Rather, it’s the structure of our animation data that has improved.

 

Key benefits of Gen3.1: 

Gen3.1 features an updated skeleton with more joints and a new naming convention that more closely conforms to industry standards. As a consequence, 3.1 improves the ingestion of our animation data across software environments, whether that’s in FBX format or as raw animation data.
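
To illustrate why a standardized naming convention makes ingestion easier, here is a minimal, purely hypothetical sketch of a retargeting map. None of the joint names below are the actual 3.1 names – consult the skeleton diagram referenced below for those:

```python
# Hypothetical mapping from RADiCAL-style joint names to a target rig.
# Both sides are placeholders, not the actual 3.1 naming convention.
RETARGET_MAP = {
    "Hips": "pelvis",
    "Spine": "spine_01",
    "LeftUpLeg": "thigh_l",
    "LeftLeg": "calf_l",
    "LeftFoot": "foot_l",
}

def retarget_frame(frame: dict) -> dict:
    """Rename per-joint animation channels onto the target rig's names."""
    return {RETARGET_MAP[joint]: xform
            for joint, xform in frame.items()
            if joint in RETARGET_MAP}

print(retarget_frame({"Hips": (0.0, 1.0, 0.0)}))  # {'pelvis': (0.0, 1.0, 0.0)}
```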

In short order, we will also be releasing plugins that make the retargeting process for Blender, Unreal, and Unity even easier.

 

Transitioning from 3.0 to 3.1: 

We understand many of our users have developed pipelines that rely on the RADiCAL skeleton having a particular structure.

To help ease the transition, we’ve made a new 3.1 T-pose available in the download section. You can also see a diagram of the new skeleton and the naming convention below. As you start to align your pipelines for 3.1, we can promise that we don’t expect to make structural changes to our skeleton going forward. 3.1 will be our standard for years to come.

 

New RADiCAL Samples: 

New RADiCAL Samples with free FBX downloads can be found here.

 

How to export legacy animation scenes:

If you need to export FBX animation data for legacy Gen2 or Gen3.0 results, you can do so through our website for a period of one month going forward, i.e., from today through early October 2020. The user experience is the same: simply hit the FBX download button on the completed scene page. After that initial transition period, from October 2020 onwards, exporting Gen2 or Gen3.0 results to FBX will require the help of the RADiCAL support team, so you should expect it to take more time. We therefore recommend you start exporting now.

 

Special note for Blender users: 

For Blender users, please select automatic bone orientation under the armature settings when you import the FBX.
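
If you’re importing via Blender’s Python API instead of the UI, the same setting can be passed directly. A minimal sketch (the file path is a placeholder):

```python
import bpy

# Import a RADiCAL FBX with automatic bone orientation enabled –
# equivalent to ticking the option under the armature settings in the UI.
bpy.ops.import_scene.fbx(
    filepath="/path/to/radical_result.fbx",  # placeholder path
    automatic_bone_orientation=True,
)
```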

Some users have reached out because they’re experiencing re-targeting issues with respect to our FBX in Blender; specifically, they’re seeing some abnormal rotations in the skeleton.

We’re aware of the problem and have identified the root cause.  We’re now working on a temporary solution.  Please bear with us for the next few days while we generate a short video tutorial for the temporary fix.

You should also know that, hopefully within just a few weeks, we will release an add-on that will make re-targeting a drag-and-drop process in Blender.

If you have specific thoughts on Blender, or these specific issues, feel free to drop us an email or book a meeting with our team.

As always, thank you!

Team RADiCAL

 

*   *   *

 

As always, feel free to drop us an email or book a meeting with our team.

Many of you have asked for the ability to process longer videos. We’ve heard you loud and clear.

We’ve increased the maximum duration, from 30 seconds to 15 minutes, for any one video you upload. We’re looking into raising that limit further, but we have more work to do on that topic. To support longer videos, we’re also making two related, and important, changes:

  • Playtime add-ons are now automatic: if you go over your remaining playtime credits, we’ll automatically add enough playtime to your account to get the job done and your account will be charged for the playtime add-on. This will be completely seamless. To make sure you have complete visibility into your charges, we’ve added your up-to-date playtime budget summary to the upload page, so you always know what to expect, given your workloads.
  • FBX on demand: converting raw animation data to the FBX format requires processing power. Rather than delaying your visual results, we’ll deliver your visual results without the FBX. You can then decide to request the FBX for the results you like. This means that you’ll get your visual results faster, but you’ll wait a bit longer for your FBX files.

 

Always consult our terms and conditions for details. If you’re unsure about anything, drop us an email or book a video call with our team.

As always, thanks for being part of our community!

– Team RADiCAL

We detected and fixed an issue today that, between June 13 and June 15, caused our web visualizer and notification systems to malfunction, such that many users were not told when their results were ready (even though they were).

We’re sorry if this has affected you. It has now been fixed, all of your scenes have been processed, and results can be viewed. If you were affected by these issues, please drop us a note at [email protected] so we can try to make up for it.

Thanks for your patience with us, as we roll out Gen3 across the entire platform.

Best –

Team RADiCAL

Our latest AI: Gen3 

 

Today, we are launching Gen3, the latest generation of our AI for our community of creators and developers. 

 

Gen3 has been a labor of love, skill and persistence. We’ve been on it for more than a year, because we knew that the Gen3 architecture would lay the foundation for a revolution in 3D motion tracking science. It is difficult to overstate how excited we are. Not only does Gen3 provide far improved output, it also does so at significantly enhanced throughput – so much so that it’s now capable of running in real time.

 

With all those improvements now available, we’ll be releasing a range of new products, both in the cloud and for local (on-prem) use.

 

RADiCAL consists of a small team of 3D graphics and AI enthusiasts.  We hope you enjoy the fruit of our labor as much as we do.  Below we have summarized just some of the highlights we want you to know about. 

 

Key features: 

 

RADiCAL is optimized for content creators, with the following priorities guiding everything we do: 

 

  1. Human aesthetics: because of our holistic approach to motion and deep learning, we’ve massively enhanced the human, organically expressive look and feel of our output, with smooth results that substantially reduce jitter and snapping; 
  2. Fidelity: Gen3 was designed to tease out much more detail in human motion than previous versions; 
  3. Speed: we want to ensure that our technology is capable of running in real time across most hardware and software environments. 

 

Going forward, Gen3 will support both Core (our cloud-based motion capture technology) and new real-time products (including an SDK) that we will announce and release shortly.

 

While Gen3 has moved in massive leaps toward realizing those priorities, we also know that we have more work to do. More about that below. 

 

About our science:

 

There’s a lot of secret sauce in our science. But here’s what we can say: we’ve developed our AI to understand human motion holistically. Rather than producing a sequence of poses to create the impression of motion, we interpret the actor’s input through an understanding of human motion and biomechanics in three-dimensional space over time. In other words, our technology thinks in four dimensions: x, y, z and time.

 

We have more work to do:

 

As proud as we are of our progress, we want to do better in a few areas. One of our top priorities for the next few weeks and months is to better anchor our animations to the floor and reduce certain oscillations. 

 

We expect to roll out a first set of improvements within weeks, which should take us much closer to where we want to be in terms of reducing foot sliding  and oscillations. 

 

But we expect more work to be necessary after that. Those additional improvements will come with the next large release, in version 3.1 or 3.2.  We’ve already started to work on those improvements and we’re genuinely excited about making the results of our research public soon.

 

In the meantime, you can substantially mitigate these effects by following the guidance below.

 

How to get the best results:

 

To get the most out of our technology, you should: 

 

  • Static, stable camera: Place your camera on a flat, stable surface (or a tripod, of course). Don’t adjust the zoom while recording. Don’t cut between different camera angles.
  • Single actor: Record a single person at a time;
  • T-pose calibration: Ensure the actor strikes a T-pose within the first five seconds, with the entire body clearly visible at a frontal angle to the camera; and
  • Aspect ratio: Record, use or upload videos with aspect ratios no wider than 4:3. That’s because our AI only processes videos in a 4:3 ratio. While you can upload videos with wider ratios (we’ll crop them back automatically), you should keep your actor inside the 4:3 region so they don’t get cropped out – the sketch below shows how that crop works out.
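
To make the crop concrete, here is a quick back-of-the-envelope sketch (the 1920×1080 input is just an example) of the centered 4:3 region that survives from a wider frame:

```python
def crop_to_4_3(width: int, height: int) -> tuple[int, int]:
    """Return the centered region that survives cropping back to 4:3."""
    target = 4 / 3
    if width / height > target:
        # Frame is wider than 4:3 – the sides get cropped away.
        return round(height * target), height
    # Frame is 4:3 or narrower – nothing is cropped horizontally.
    return width, height

# Example: a 16:9 recording at 1920x1080 keeps a 1440x1080 center region,
# so roughly a quarter of the frame width is lost on the sides.
print(crop_to_4_3(1920, 1080))  # (1440, 1080)
```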

 

Play nicely, and you’ll get the best results!

 

*     *     *

 

As ever, we’re forever grateful for the support of the RADiCAL community.  We’re excited about feedback, good and bad.  We’re even more excited about constructive criticism and assistance.  

– Team RADiCAL 

09 Jan 2020

As we prepare for the transition to Gen3, we have to make a few changes.  The core of our platform will continue to run, but we’re limiting access to Gen2 through free accounts via some of our apps until Gen3 is out. 

 

  • Website: Our website will continue to operate as usual, with access to your completed scenes (Projects) and community scenes (Explore). You can register as a new user (including for a free account), download FBX files, and manage your account; if you hold a paid subscription, you can also upload new videos via our custom upload page for processing through Gen2.
  • Windows app: Our Windows app will continue to be available on Steam. It will operate as usual, and we will continue to maintain it.
  • Mobile apps / MacOS app: To prepare for Gen3, we are suspending downloads of our iOS, Android and MacOS apps from the app stores.

    If you’ve already installed these apps, you can continue to use them by accessing your completed scenes and community scenes. However, you will no longer be able to upload new videos into our cloud from the mobile apps. The mobile apps won’t prevent you from recording videos or initiating uploads, but the upload will not reach our servers (and you will receive an email to confirm this). We will no longer update or maintain these legacy apps.

    If you’re a paying subscriber, you can continue to upload new videos for processing through Gen2 via our custom upload page on our website.

 

We hope you bear with us.  The transition to Gen3 is a momentous task.  We’re excited about it, and we hope you are, too. 

 

– Team RADiCAL  

Over the last 9 months, countless users have asked us to release a feature that would allow them to upload videos into our cloud-powered AI that they recorded independently, i.e., videos that were not recorded through the RADiCAL mobile apps.

We’ve heard you loud and clear.

Starting today, you’ll be able to upload your own videos through our custom uploader on our website.

You can get to the custom uploader in two ways:

 

  • Web: From the members area of our website: -> hit the UPLOAD button -> hit “Custom Uploader” in the pop-up dialogue
  • Desktop Apps: From our desktop apps: -> hit the NEW SCENE button -> hit “USE EXISTING VIDEO” in the dialogue -> hit NEXT (opens up the browser)

 

You can use the custom uploader with any Creator, Producer or Professional subscription, monthly or annual.

This is a work in progress, and we’ll continually improve on what we do. Please get in touch if you have any questions: [email protected]

 

*     *     *

 

Team RADiCAL