Episode 4:


CamX on AR (Augmented Reality), EVITA, and Mo-Cap


In Ep4, Cam opened our eyes to the world of 3D and Augmented Reality (AR). Given that all of RTFKT's creative assets came in 3D, we weren't exactly unfamiliar with it, but he still wanted to show us the endless ways we can connect with our digital avatars – even doing live motion capture with them, almost in a Ready-Player-One-ish manner.

Cam also shared with us his motivation for creating the EVITA app, an iOS app that started as a passion project, meant for Clones to create AR content with their digital avatars. Today, Cam has officially decided to call off further development of the app. We wish him all the best and hope that this is just an interlude and that there is more to come.

Introduction: Cam's Journey From Developing AR To The EVITA App

M: Can you tell us more about your background, your work, and how you learned all this AR stuff?

C: I'm an XR developer, creative technologist, and Head of Product at Really. After graduating from USC, I first worked at the 20th Century Fox Innovation Lab as an interactive engineer, building demos for new consumer experiences in AR/VR. Later, I shifted focus to mobile AR for location-based experiences.

But on the side, I've carved out some time to explore Web3, take my skills there, and weave them in where I can. I think most of that time is spent with the CloneX community.

I got into this space because I saw how much potential there was for 3D avatars. One, in augmented reality, just because that's what I was familiar with. And two, in how 3D avatars can be used across different game engines like Unreal Engine and Unity. A project that puts out 3D files the way RTFKT did just made so much sense to me.

I saw the potential to build your online identity around it and put all these technologies to use. So I've been experimenting with technologies and sharing my experiments. That's my contribution.

M: There's this whole cluster of concepts – avatars, filters, digital identity. One concept you brought up and dive into a lot is digital identity. What does it mean to you? How do you define it?

C: Yeah, that's a great question. Digital identity for me right now is having an image, or just a character, a persona that represents you online, and being able to use it across everything. A Twitter PFP is obviously one instance of your digital identity. But to be able to take that character and use it in a Twitch stream, in a game, or anywhere online – having that interoperability is what really, really excites me: one identity across everywhere.

For me, the 3D side of it is what's really, really cool, and the technology just keeps enabling more and more. There are a lot of people doing VTubing – virtual YouTubers – who've been putting these game engines to use. And there are cool tools becoming more accessible and user-friendly, so that everyone can take their character, stream as their character, and create content as their character.

Real quick – I told Beetle I wanted to do a parallel demo of the Twitch stream character in this Unreal Engine environment that a friend of mine built for me. So I'll post a tweet, you guys can check it out, and I'll try to run it simultaneously with the Spaces, just so you can see what I mean by 3D characters, environments, and streaming. Because it's really cool. And shoutout to Pending Reality, who I think is in here, who built this tool for me.

M: That's cool, because that's one of the things Beetle and I wanted to achieve when we brainstormed this series of Spaces – to showcase creators and to fuel our own curiosity. You know, a Twitter Space is basically just radio chat, and Twitter itself isn't 3D, it's 2D. And it feels like it's still a little bit hard to do all that integration with, say, a platform that helps you stream. What do you feel is the one technology brick we're still missing, or that's still a little further ahead in time, that will let us stream in and out super seamlessly with our identity?

C: I think the biggest barrier to putting your character to use the way you want is interoperability – just having a drag-and-drop solution where you can take your avatar and use it across the board. The solution is out there, and some people are probably already familiar with it: the VRM file format, a standardized avatar format that gives you one file you can drag, drop, and use on whatever platforms or tools are built around VRM.

You're starting to see it, and that's how the streaming app I'm Twitch streaming through is built: people can drop their VRM file in and use the tool without doing any extra setup. If people begin to adopt interoperability in that way, there will be a much smoother ramp to using an avatar wherever you want to use it.
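A quick technical aside: to make the drag-and-drop idea concrete, here's a minimal sketch (not Cam's app – just a common web stack) of loading a VRM avatar with three.js and the open-source @pixiv/three-vrm library. The file path is a placeholder for any VRM export of a Clone:

```ts
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { VRM, VRMLoaderPlugin } from '@pixiv/three-vrm';

// VRM is glTF under the hood, so three.js's standard GLTFLoader can
// parse it once the VRM plugin is registered.
const loader = new GLTFLoader();
loader.register((parser) => new VRMLoaderPlugin(parser));

const scene = new THREE.Scene();

// 'avatar.vrm' is a placeholder path, e.g. a Clone exported as VRM.
loader.load('avatar.vrm', (gltf) => {
  const vrm: VRM = gltf.userData.vrm; // the plugin attaches the parsed avatar here
  scene.add(vrm.scene);               // the whole rigged avatar drops into the scene
});
```

This is the "one file, many tools" promise in practice: any platform built around VRM can do the equivalent of these few lines.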

The second piece is making sure that the VRM avatar you're using is optimized and ready for all these other platforms. I think the biggest barrier for our community right now, for Clones, is the facial blend shapes – the ones that animate the mouth, the eyebrows, and blinking. Those aren't set up very well, so there's some extra work to be done in Unity or Blender to make them look good. So if you're trying to use your Clone for streaming, I would recommend getting a good VRM version of it first and then exploring the VRM tools. Because once you have a good VRM file, you can access everything built for VRM.
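Those blend shapes surface in a VRM file as named "expressions". One way to check whether an export is wired up correctly is to drive a couple of them directly – again a sketch with @pixiv/three-vrm, reusing the `vrm` from the previous snippet:

```ts
import { VRM } from '@pixiv/three-vrm';

// VRM exposes facial blend shapes as named expressions: mouth visemes
// like 'aa', plus 'blink', 'happy', and so on. If a Clone's VRM export
// has these set up badly, driving them by hand shows it immediately.
function testExpressions(vrm: VRM): void {
  vrm.expressionManager?.setValue('aa', 1.0);    // fully open mouth viseme
  vrm.expressionManager?.setValue('blink', 0.5); // half-closed eyes
  vrm.update(1 / 60);                            // apply this frame's values
}
```

The Unity/Blender cleanup Cam mentions is essentially making sure each of these named expressions deforms the mesh the way it should.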

M: Maybe to jump a little, there are two questions I wanted to ask. First, there's that parcel you received by post a few days ago, that mo-cap thing. I have no shame in saying I had no idea what mo-cap was. Then I saw you setting it up, and it makes sense, especially now that you've mentioned you come from the movie industry. Tell us a bit about that device, because it's something we've all seen in the making-of of big movies. Why did you buy it? What's your intention behind it? What does it let you do that you couldn't do before? Just curious how you use this new tool.

C: Yeah, definitely. Motion capture is just a way to capture your movement, your motion, and use it – whether in 3D software like game engines or, like you were saying, in movies and animation. It's a way to simplify human animation.

It's not affordable – it definitely hasn't been affordable in the past. That's why it's not a tool everyone has and puts to use; if it were, I think everyone would love to have it in their toolset. I've always wanted to explore motion capture because it lets you control your avatar in real time and really customize the movement – take your movement, take yourself, and put it into that 3D avatar. So it's something I always try to delve into, to see what the latest ways of doing it are.
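As an illustration of how mocap data ends up on an avatar: whatever the hardware, most systems boil down to a per-frame set of joint rotations copied onto the rig. The wire format below is hypothetical; the humanoid bone names come from the VRM spec:

```ts
import * as THREE from 'three';
import { VRM } from '@pixiv/three-vrm';

// A hypothetical incoming mocap frame: one rotation per tracked joint.
// Rokoko, Mocopi, and webcam trackers each have their own formats, but
// they all reduce to per-bone rotations like this.
interface MocapSample {
  leftUpperArm: THREE.Quaternion;
  rightUpperArm: THREE.Quaternion;
}

function applyMocap(vrm: VRM, sample: MocapSample): void {
  // VRM's humanoid rig maps standardized bone names to scene nodes,
  // which is what makes retargeting source-agnostic.
  vrm.humanoid.getNormalizedBoneNode('leftUpperArm')?.quaternion.copy(sample.leftUpperArm);
  vrm.humanoid.getNormalizedBoneNode('rightUpperArm')?.quaternion.copy(sample.rightUpperArm);
}
```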

The first piece of hardware I was able to get is this Sony Mocopi device. It's a new motion capture device made with a lower price entry point, specifically for consumers to get a taste of motion capture. It was a really cool device, but it wasn't accurate enough for good-looking animations, and it couldn't do hand tracking, which was the other piece that was really important to me.

The hands are so expressive, and this device just didn't provide a solution for that. So I always, always, always wanted – just dreamed of – buying a motion capture suit, which is the expensive, higher-quality solution to motion capture. There's this brand, Rokoko, which is probably the easiest entry point to the suit solution. And I irresponsibly just bought one.

M: I was Googling while you were talking about the Sony Mocopi. So it's these small trackers you put on the ankles, wrists, and head. And I like what you said, because it comes back to digital identity: when you see a virtual entity like a 3D Clone, what makes it feel lively and emotional enough?

C: The suit is, obviously, a full-body suit with trackers at the arms, wrists, and legs – trackers all across the suit. And in combination with that, the thing that enables hand tracking is a pair of gloves you can buy separately.

And then for the face – face tracking is actually something everyone can get started with. It's typically done through an iPhone, using a program like Live Link or Face Capture. That front-facing camera on the iPhone is the same one I use for the face filters in the AR app, and it does a really good job of tracking. All these programs use the iPhone camera, and that's the solution motion capture brands like Rokoko use for their apps. It's also free. So if anyone wants to get started with VTubing or streaming, I would just start with the iPhone, start with the camera, and figure out face tracking. Then you build from there. That's sort of what I did a year ago.
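For context on how that works under the hood: ARKit-style face tracking (what apps like Live Link Face stream from the iPhone's front camera) reports a few dozen weighted coefficients per frame, keyed by names like jawOpen. Mapping a handful of them onto VRM expressions is enough for basic VTubing. The mapping below is illustrative, not any particular app's:

```ts
import { VRM } from '@pixiv/three-vrm';

// Illustrative subset of an ARKit-coefficient-to-VRM-expression mapping.
const ARKIT_TO_VRM: Record<string, string> = {
  jawOpen: 'aa',              // open jaw -> open-mouth viseme
  eyeBlinkLeft: 'blinkLeft',
  eyeBlinkRight: 'blinkRight',
};

// Apply one frame of face-tracking coefficients (each 0..1) to the avatar.
function applyFaceFrame(vrm: VRM, coefficients: Record<string, number>): void {
  for (const [arkitName, vrmName] of Object.entries(ARKIT_TO_VRM)) {
    vrm.expressionManager?.setValue(vrmName, coefficients[arkitName] ?? 0);
  }
}
```

This is also why the blend shape cleanup matters: the tracking side is solved, but the avatar's expressions have to be wired up for the mapping to look right.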

M: At what point do you feel you could live in a little box, a little studio, with your mo-cap, just ordering food from Uber? You just live your life in a mo-cap suit with all these layers of animation and clothes, and you go to work remotely – because we're all more or less working from our home offices now anyway. It feels almost like a virtual civilisation, right?

C: I think VRChat is a popular space where a lot of people are living their lives – their social activity – in VR worlds, going to VR nightclubs. VRChat does a really good job of building immersive environments that really feel like Ready Player One. So there are places where you can already do it. In terms of it being everywhere and mass adopted, though, I hope that doesn't happen soon. I still want to go outside and do things.

But I think hardware like the Apple Vision Pro is really, really making inroads into a world where there's a digital layer over your everyday life. That headset won't be mass adopted; in the near future it will be for developers. But the fact that it lets you layer content over your world so easily, with pass-through video, and that it's Apple stepping into the space to create this technology – that means it's going to get there one day or another. So I would pay close attention to what Apple does, and I think you'll have a better answer for how close we are to a fully digital life.

Creating The EVITA App – An iPhone App for Clones to Create AR Content

M: So you've built an app called EVITA, right? It helps us play with our Clone files and start using them in different contexts. I'm curious: what is the app for, how did you design it, and what's your goal with it?

C: It's been about a year of working on that app, which is really crazy to think about. The initial idea was to build something for myself – I really just wanted a tool that let me do more with my digital avatar. I wanted face filters that look good. I wanted to use my Clone in AR and see it in the world. So the purpose of the app was content creation: something that lets you really take advantage of the 3D files RTFKT provided, and gives you tools to build content around who you are.

The one thing I know how to do well is build AR apps, so I can share this with Clones and we can all make better content and be the shining pinnacle of digital avatars in Web3. I wanted people to see what's possible with your Clone, with digital identity, and to start creating content as a digital avatar. I was really happy just seeing how much use it got. There's Will, who does SneakerCon – he was always doing videos with his Clone. And then Tommy, who always put it to use too.

There's been a lot of onboarding. It grew way bigger than I ever intended – I think it's at around 200 Clones at this point. In terms of where I want to take it, though, I've spent so much time adding new Clones and trying to create something that works that I haven't really thought past that.

For me, the whole EVITA idea was that while I'm doing this work to build a tool, I should also build a brand, something that can exist in this shared world of RTFKT – where all the builders are theoretically building alongside RTFKT, and we're all creating this world together. I just wanted to make sure that what I was doing also contributed to something that might be able to exist in this ecosystem in the future.

And then I wanted to give my Clone a little backstory. That was the third purpose: to have an avatar and give it any story you want. Storytelling is fun – I think it's the most fun element of having any character, just being able to create a story around it. And storytelling is what I wanted to get back to. So hopefully, with 200 users who all have their Clones in the app, you can build a game or a story in it, or play that story as your character. Those are the different branches of what this app was, at least in my head.


B: So you know I love EVITA, and I have a few thoughts about the app. First thing: it's only on iOS, and on TestFlight, right? So are you excluding Android users?

C: I know, it's a terrible thing to do, and I hate building only for iOS, because I think there are more Android users out there than iOS users. The only reasons I haven't built it for Android are, one, the face filters – Android phones don't have the same front-facing camera as the iPhone, so I wouldn't be able to do the better face filters everyone really likes in the app. And two, I just haven't had the time.

I've really been pushing the limits on some of the features in the app and on the quality of the textures. I wanted everything to look good – I think I care more about things looking good than being usable, as terrible as that sounds. I just wanted the best-looking AR face filters. It's not the best approach, but it's been my approach. So yeah, I think the iPhone 12 Pro is probably the lower end of the spectrum in terms of hardware requirements.

B: I love how the EVITA app lets people create content with their Clones. But that comes with a challenge. When people create art or products with their Clones, they usually have a big goal in mind – they want to make something, or to have the Clone represent them. So my question is: for people who aren't artists, who don't need the Clone to represent them in that sense, is there any incentive to use their Clones? How would you encourage them to use their Clones in content creation?

C: I would say the way to get people motivated to use their avatar and do more with it is dipping their toes into some of the possibilities, like face filters. Face filters are such an easy on-ramp to trying out something that's really yours. You have this character – that's the beauty of Web3 and digital identity, that you more or less own this avatar, you own this character. So it just makes sense to put it to use where you can.

So I would encourage people to experiment with something small like that – try something. When you take the PFP out of the 2D realm and see the 3D elements of it, you see how much work went into getting these traits looking amazing, and all the different angles you don't see when you look at something on OpenSea or just look at the JPEG. There's a lot of beauty and a lot of work in the whole character, and being able to see those different perspectives is a valuable experience. I think it helps encourage people to want to do more.

It births new ideas – you can see the ways you can use this character spatially, use it for as much as you want. I think that's the really beautiful and exciting part of having a 3D character. Once these tools are available to you, you can do anything with it.


Content Creation And Monetisation

B: As we talk about creating content and possibly partnering up with brands, I have a big question for everyone. What's in it for these brands to use a Clone, or any other PFP, compared to, say, using traditional models and influencers?

S: I think the biggest challenge and roadblock to our Clones achieving any kind of critical mass or monetisation is capturing a following first, and it will probably take a while to build that following organically. As CamX said, another roadblock is the sheer amount of resources required to deliver that much content, that frequently and that consistently, to attract a following. So I guess it will probably be at least a year or two, or even three, before we can start attracting these brands.

C: It's a really good question, Beetle – why should you use a Clone instead of anything else? A lot of the hope is that the RTFKT brand grows, and that having a character in this Nike-attached world becomes attractive to other brands. But right now, in this present moment, we're not at that point, so I think what a lot of people have rightfully done is build around the likeness of their character, and not necessarily attach themselves to anything.

For a good number of us, we don't have a clear idea that this will definitely work out, but it's the excitement of being able to try something innovative and be at the forefront of it that gets us to show up and keep trying and creating content, right? I think that's common to a lot of creators in the RTFKT ecosystem. As we approach 11, I'd like to invite anyone who's keen to come up on stage – if you want to ask questions or give Cam his flowers, please feel free. I've also invited... I'm in the midst of criticism. Yeah, I have criticisms as well. Get up here. So I've invited Pending up on stage as well, because I know he's been helping you with the setup, right? I would love to hear about the challenges of setting up this VTubing rig, and especially the different textures you see. Can you guys share more about that? Real quick...

B: As we round things off, and with Pending up on stage to share a bit more about mo-cap, I wanted to ask: when you talk about live motion capture with the Rokoko suit, how does that compare to recording without the suit and then running it through Wonder Dynamics afterwards?

P: So Wonder Dynamics is more of an offline authoring process. You take your video, do something in it, send it to Wonder Dynamics, they do some AI-based processing, and you get your 3D avatar composited into the video you shot.

But the thing is, what if you want to do it in real time? What if you want it to interact with other people on a stream? What if you want to take that technology and throw it into a 3D game world where you can motion capture and role-play all at the same time? Because that's where the future is headed.

Wonder Dynamics, and a lot of this offline stuff, means you have to send the file, wait for it, and get back a video – if you're a video editor, that's fine. But the big thing with all of the tools getting easier to use now is this: the biggest hurdle to people being creative has been the creative tools that existed up until now. You have to learn them, read tutorials, do all this stuff just to make something that looks cool. The whole idea with building this stuff was to say you don't need to figure out all the nitty-gritty tech. I spent a year and a half figuring it out and bashing my head against the wall. Now you can just bring your Clone from Clone Tools, export it as a VRM – maybe with a little tweaking involved – throw it in, and you can have super high-quality, ray-traced, real-time animation, basically.
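The offline/real-time split Pending describes comes down to where the work happens: an offline service processes a finished file, while a real-time rig re-poses and re-renders the avatar every frame from the latest tracking data. A sketch of that loop, reusing the hypothetical MocapSample/applyMocap from the earlier snippet:

```ts
import * as THREE from 'three';
import { VRM } from '@pixiv/three-vrm';

// Every frame: pull the latest tracking sample, pose the avatar, render.
// Nothing is uploaded or waited on – that's the real-time difference.
// `getLatestSample` stands in for whatever feed you have (suit, webcam, AI).
function startRealtimeLoop(
  vrm: VRM,
  renderer: THREE.WebGLRenderer,
  scene: THREE.Scene,
  camera: THREE.Camera,
  getLatestSample: () => MocapSample,
): void {
  const clock = new THREE.Clock();
  renderer.setAnimationLoop(() => {
    applyMocap(vrm, getLatestSample()); // pose from live tracking data
    vrm.update(clock.getDelta());       // advance expressions, look-at, spring bones
    renderer.render(scene, camera);
  });
}
```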

One thing that's really cool about motion capture – or, I guess, motion data in general – is that where the motion comes from doesn't actually matter. It could be from a suit, from a webcam, or from an AI app. The cool thing is when you can replace motion with AI, for example. In an upcoming build, which will be public and free for everyone to use, you're going to be able to plug in a brain – basically edit a brain and let that drive the animation, the facial animation, the expressions, and the personality of your Clone.

M: It was really amazing to understand your vision, to see your progress, and to see the level of demand and excellence you hold yourself to. It's frankly super commendable and a great example for all of us. I just can't wait to test a few more of the tools you've mentioned. And as you say, starting with small steps – doing the face animation before the fingers – is a good way to make sure we're not overwhelmed when we're new to this space.