We’re excited to announce an important update to our AI: Gen3.2.

Gen3.2 comes with improvements in these areas:

    • Fidelity: You’ll see more detail across a wider domain of motion and video categories
    • Smoothness / stability: More organic stability with less jitter, choppiness and jerkiness, even when faced with more challenging videos
    • Input tolerance: We now understand a wider range of camera angles and aspect ratios across your uploads
    • Fixed the hunch: We fixed a common issue that produced a “hunching” effect in certain videos

 

Paving the way for more:

This update represents an important leap: we’ve been able to peel away legacy constraints.

Beyond making the results we produce right now more nuanced and stable, we’ve also opened the door to a number of improvements in the pipeline for release to our entire community soon, including a wider motion domain (more motion categories), higher fidelity, improved footlock, greater stability, and real-time performance for everyone.

 

Best practice -> best results:

Because our AI is so much more resilient, providing greater input tolerance and stability, you can use a much wider variety of videos.  That said, for the best possible results, we recommend you continue to observe these principles:

    1. Single actor: details
    2. Full body visibility (don’t leave the frame): details
    3. Good calibration: details

 

Fast updates: 

Note that results you have previously produced using an older version of the AI can be updated with a single click from the right-hand sidebar in your scene.

FBX results: Blender add-on / Unreal asset pack:

Gen3.2 comes with small changes to our standard skeleton. For most users, these changes are trivial and will be straightforward to handle when applying our FBX-formatted results in their own software environments.

However, if you’re relying on our Blender add-on or Unreal asset pack to use our FBX-formatted results, please note that we’ve updated both. The new Blender and Unreal integrations are available for free through our downloads page (along with the previous versions, for backward compatibility).

 

We’re celebrating:

We’re celebrating Gen3’s anniversary and the release of Gen3.2 with an upgrade code for all annual plans.  See more here.

 

Our friend Jacob Ssendagire (Instagram) has developed beautiful content using RADiCAL and Cinema4D.  He’s now prepared a great tutorial that makes it easy to apply our FBX-formatted results to characters in C4D.

As a reminder, you can use the free T-pose rig to help you map to your character before re-targeting your RADiCAL animation to that character.  You can access the T-pose file for free through our downloads page.

Thanks Jacob!

13 May 2021

We’ve updated our AI to version 3.1.11 in RADiCAL Core.  Changes include:  

    • Improved fidelity (detail) and stability. 
    • Expanded input tolerance by allowing for a wider range of camera angles and aspect ratios. 
    • Fixed a common issue that produced a “hunching” effect in certain videos. 

We recommend you continue to observe best practice, despite greater input tolerance and stability: 

    1. Single actor: details
    2. Full body visibility (don’t leave the frame): details
    3. Good calibration: details
    4. Aspect ratio: 4:3 (landscape): details

 

Fast updates: 

Note that results you have previously produced using an older version of the AI can be updated with a single click from the right-hand sidebar in your scene.

Studio:

The AI update will be rolled out to RADiCAL Studio in due course (check announcements).

For many of our users, it is important that their results come with stable and realistic contact between the character’s feet and the floor. We’ve therefore been working hard on a solution we call “footlock.” 

 

We’re still working on it. To be precise, we’re right in the middle of it. 

 

But we’re now able to release an experimental version, in public beta, in the form of a post-processing footlock layer in RADiCAL Core. The post-processing layer produces decent, and sometimes great, results across a number of use cases. However, because the footlock layer may not work for all users across all use cases, we’ve decided to provide a choice: you can see your results both with and without the footlock solver enabled.

 

This footlock solver is available automatically for all scenes created in RADiCAL Core after December 8, 2020. It remains experimental and we’re continually improving it. To understand how best to use it, and where it might do well or fail, please consult our Learn section.

 

Switching to footlock: open a new tab within your scene

 

For scenes created after December 8, 2020, you can choose to view your results in a separate browser tab, but within the same scene, with the “footlock” solver enabled.  To do this, click the “Switch” button within the “footlock” section on the right sidebar of your scene.

 

Your viewing mode is reflected in the “current view” status. To see the solver results, make sure that: 

  • Footlock: Available
  • Current view: On

 

 

FBX downloads: current view mode + file naming convention

 

The FBX generate + download buttons will give you the FBX file that corresponds to the “current view” mode.  

 

Once downloaded, you will know which version you’re looking at by checking for a suffix in the file name that looks like this: _fl. For example: 

  • Conventional FBX: gen-3-1-samples_scan-005
  • FBX with footlock: gen-3-1-samples_scan-005_fl
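
The naming convention above is easy to check programmatically. A minimal sketch in Python (the `has_footlock` helper is our own illustration, not part of any RADiCAL tooling):

```python
from pathlib import Path

def has_footlock(fbx_path: str) -> bool:
    """Return True if the FBX filename carries the footlock suffix (_fl)."""
    return Path(fbx_path).stem.endswith("_fl")

print(has_footlock("gen-3-1-samples_scan-005_fl.fbx"))  # True
print(has_footlock("gen-3-1-samples_scan-005.fbx"))     # False
```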

 

Available in Core – other products to come

 

The footlock solver is currently available as an optional view as part of the Core product (and certain Core API users). It, or a subsequent update, will also be rolled out to Studio – keep an eye out for announcements.

 


 

After carefully considering feedback from our community and the industry at large, we are excited to offer the Studio Creator package.

 

  • Use for free, no limits: anybody with a RADiCAL account can now install and use Studio to animate and visualize as much motion as they want, free of charge, no credit card required.

 

  • Download FBX exports – pay only for what you need: We’re also launching our pay-as-you-go system (PAYG) as part of the Studio Creator package. If you decide that you want to export animation data from Studio, you can purchase one-minute increments of export time that you can use whenever you want. You only pay for the results you need. Left-over credits won’t expire for a year.

 

  • Download directly from website (we’re leaving Steam): Studio is now available directly through our website here and here. If you previously downloaded Studio from Steam, no worries, you can simply delete the Steam version and replace it with the version downloaded from our website.  Your results will still be in the folder where you saved them.

 

  • Annual Producer – lower price – unlimited FBX exports:  we’re dramatically reducing the price of our Studio Producer package to $250, featuring unlimited animation data exports.  Check out our updated pricing page here.

 

We are doing this in the hopes that all creators, regardless of resources, will be able to use RADiCAL Studio in their content pipelines.

 

No subscription.  Free to download.  Free to use.

 

P.S.: many of you also asked for a Studio tutorial, which you can view here.

 

*                       *                        *

 

– Team RADiCAL

03 Nov 2020

Our friend Xuelong Mu has created a video tutorial on how to livestream your real-time results into Unreal Engine.

Check out the video here.

There is also a detailed readme in our learn section.

 

 

A quick reminder of the importance of a solid calibration to support best results. Below is some specific advice on what to do. There are smart updates in the works to make all of this easier (more detail to follow in the next few weeks).  

 

Summary (tl;dr): 

 

  • Do this: make sure the actor starts the scene, from the first frame onwards, in a T-pose facing, and in full view of, the camera, at the center of the scene (stage), with the entirety of the body visible and feet firmly planted on the floor, for 1 – 2 seconds.  
  • Don’t do this: We mean “starting” your scene with a T-pose literally: avoid footage prior to the T-pose in which the actor prepares for the scene. This includes the actor “transitioning” into the T-pose, such as walking into the frame, standing in profile, turned away from the camera, or anything less than a solid T-pose. Trim (edit) those frames out of the video if you can.
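
If you have ffmpeg on your machine, trimming away the pre-T-pose footage can be scripted. A minimal sketch (the helper name and the stream-copy approach are our assumptions, not RADiCAL tooling; `-ss` before `-i` seeks to the nearest keyframe, which is usually accurate enough for this purpose):

```python
def build_trim_command(src: str, dst: str, t_pose_start: float) -> list[str]:
    """Build an ffmpeg command that drops everything before t_pose_start seconds,
    stream-copying the rest so the video is not re-encoded."""
    return ["ffmpeg", "-ss", str(t_pose_start), "-i", src, "-c", "copy", dst]

cmd = build_trim_command("raw_take.mp4", "trimmed_take.mp4", 3.5)
print(" ".join(cmd))  # ffmpeg -ss 3.5 -i raw_take.mp4 -c copy trimmed_take.mp4
# To run it for real: subprocess.run(cmd, check=True)  (requires ffmpeg on PATH)
```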

 

RADiCAL Core calibration:

 

We currently accept, without warning, videos in which the actor’s presence is visible enough, but falls short of a solid T-pose. We made that decision because many of our users would prefer to have some results, rather than no results at all. However, this also means that we currently don’t warn users when their calibration wasn’t supportive of best results. 

 

Therefore, you should carefully review how your video starts: if there is a solid T-pose in the video, great. If there’s no T-pose, try to find a natural, full-body camera-facing pose that comes close to it. Either way, try to trim away (edit out) any footage that comes before it. 

 

RADiCAL Studio calibration:

 

  • Step-by-step (SbS) processing: The Studio app already comes with a mandatory five-second countdown to assist you in finding the best position before the Studio app starts recording your video for SbS processing. Make sure the actor stands in a solid T-pose by the time the Studio app has completed the countdown and starts recording.
  • Real-time (RT) processing: Because there is no five-second countdown, you’ll need a person other than the actor to hit the START button. You can also experiment with external camera integrations (to ensure a better angle and distance) or keyboard / mouse configurations (to allow for remote operation of the Studio app). 

 

 

 

We’ve released plugins to make the re-targeting process faster and easier for Blender and Unreal users.

For our Studio customers, we have deployed a dedicated FBX exporter through our website: https://getrad.co/uploader-studio.  

Note that the Studio FBX exporter will only read animation data produced by RADiCAL Studio. You need an active Studio subscription to use the Studio FBX exporter.

For more information on how it works, check out the dedicated learn section for RADiCAL Studio.

Studio’s real time functionality opens up exciting possibilities for previz and virtual production.  For virtual production professionals and enthusiasts, we are enabling Unreal Engine 4 LiveLink access with RADiCAL Studio. We’ve described how it works in detail here (FAQs).

We have included a sample Unreal project and instructions on setting it up with Studio.  It’s available through this Github repo.

Professional Studio subscribers will have livestream access to all engines included in their subscription.

We are also developing livestreaming capabilities for Unity, Blender, and iClone.

Our latest AI: Gen3 

 

Today, we are launching Gen3, the latest generation of our AI for our community of creators and developers. 

 

Gen3 has been a labor of love, skill and persistence. We’ve been on it for more than a year, because we knew that the Gen3 architecture would lay the foundation for a revolution in 3D motion tracking science. It is difficult to overstate how excited we are.  Not only does Gen3 provide far better output, it also does so at significantly higher throughput, so much so that it’s now capable of running in real time.

 

With all those improvements now available, we’ll be releasing a range of new products, both in the cloud and for local (on prem) use.

 

RADiCAL consists of a small team of 3D graphics and AI enthusiasts.  We hope you enjoy the fruit of our labor as much as we do.  Below we have summarized just some of the highlights we want you to know about. 

 

Key features: 

 

RADiCAL is optimized for content creators, with the following priorities guiding everything we do: 

 

  1. Human aesthetics: because of our holistic approach to motion and deep learning, we’ve massively enhanced the human, organically expressive look and feel of our output, with smooth results that substantially reduce jitter and snapping; 
  2. Fidelity: Gen3 was designed to tease out much more detail in human motion than previous versions; 
  3. Speed: we want to ensure that our technology is capable of running in real time across most hardware and software environments. 

 

Going forward, Gen3 will support both CORE (our cloud-based motion capture technology) and new real time products (including an SDK) that we will announce and release shortly.  

 

While Gen3 has moved in massive leaps toward realizing those priorities, we also know that we have more work to do. More about that below. 

 

About our science:

 

There’s a lot of secret sauce in our science. But here’s what we can say: we’ve developed our AI to understand human motion holistically. Rather than creating a sequence of poses to create the impression of motion, we interpret the actor’s input through an understanding of human motion and biomechanics in three-dimensional space over time. In other words, our technology thinks in four dimensions: x, y, z and time.

 

We have more work to do:

 

As proud as we are of our progress, we want to do better in a few areas. One of our top priorities for the next few weeks and months is to better anchor our animations to the floor and reduce certain oscillations. 

 

We expect to roll out a first set of improvements within weeks, which should take us much closer to where we want to be in terms of reducing foot sliding  and oscillations. 

 

But we expect more work to be necessary after that. Those additional improvements will come with the next large release, in version 3.1 or 3.2.  We’ve already started to work on those improvements and we’re genuinely excited about making the results of our research public soon.

 

In the meantime, you can substantially mitigate these effects by following the guidance below.

 

How to get the best results:

 

To get the most out of our technology, you should: 

 

  • Static, stable camera: place your camera on a flat, stable surface (or a tripod, of course). Don’t adjust the zoom while recording. Don’t cut between different camera angles. 
  • Single actor: record a single person at a time; 
  • T-pose calibration: ensure the actor strikes a T-pose within the first five seconds, with the entire body clearly visible at a frontal angle to the camera; and 
  • Aspect ratio: record, use or upload videos with aspect ratios not wider than 4:3.  That’s because our AI only processes videos in a 4:3 ratio. While you can upload videos with wider ratios (we’ll crop them back automatically), you should keep your actor inside the 4:3 ratio to ensure they don’t get cropped out.    
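
Since wider uploads are cropped back to 4:3 automatically, it helps to know the safe region in advance. A minimal sketch of the centered-crop arithmetic (our own illustration of the 4:3 rule stated above, not RADiCAL’s actual cropping code):

```python
def crop_to_4_3(width: int, height: int) -> tuple[int, int]:
    """Size of the centered 4:3 region that fits inside a width x height frame."""
    if width * 3 > height * 4:          # wider than 4:3: the sides get cropped
        return (height * 4 // 3, height)
    return (width, width * 3 // 4)      # 4:3 or narrower: top/bottom get cropped

print(crop_to_4_3(1920, 1080))  # (1440, 1080): a 16:9 frame loses 240 px per side
```

In other words, on a 16:9 upload the actor should stay within the central three-quarters of the frame width to avoid being cropped out.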

 

Play nicely, and you’ll get best results!

 

*          *          *

 

As ever, we’re forever grateful for the support of the RADiCAL community.  We’re excited about feedback, good and bad.  We’re even more excited about constructive criticism and assistance.  

– Team RADiCAL 

We’re excited to announce that the very talented CG Geek has released an easy-to-follow tutorial on how to use RADiCAL to animate characters in Blender.

In this video, Steve takes you through the entire process of animating characters using RADiCAL FBX output in Blender.  He covers filming the scene, mapping the character, retargeting the animation, and rendering the character.

Steve’s video (and many more great ones) can be seen here on his YouTube channel.

Steve’s video comes on the heels of massive improvements in our product.  With our impending Gen3 release, we will be releasing a much improved skeleton with plugins for Unity, Unreal, and Blender.

For our Blender users, we are now also releasing a standard T-pose rig that will make the re-targeting process easier. You can currently download the T-pose file here, and it will soon be available through our downloads page as well.  Steve describes how to use the T-pose rig in his video.

Also, as a reminder, if you’d like to share your animated characters with us, we’ll post and promote your content on our social media channels.  Be in touch!

 

*          *          *

Enjoy!

– Team RADiCAL