Thoughts on Virtual Embodiment

I pop on my Oculus Go headset and venture into a different world. I'm surrounded by generally friendly, if odd-looking, companions. Most of the robots or aliens or anime people just float by and ignore me. Others are bold enough to strike up a "conversation" by voice or a text message. It's really awkward for newbies, and it has always remained a kind of odd novelty to me. Full disclosure: I'm old enough. But each adventure in worlds like AltspaceVR leaves me both intrigued and thinking deeply.

How should we think about different ways of relating and communing with each other, especially when using technology? What are some implications and questions that we should be asking in relation to innovations like virtual reality, online communities, etc.?

I'll use an academic-sounding term for this: technologically mediated communion. We all know what this is like because we do it every day: we talk on the phone, message each other, make video calls, play online games in real time using different communication tools, etc. There is technology in between us. It links us, but it alters the communion in some way.

I think we can all agree that technologically mediated communion and relationships have practical value, providing a level of connection and experience that is preferable to no connection at all. I spend many hours a week in such mediated communication with team members, family members, etc. If we live across the ocean from each other, I value the chance to make the connection.

However, we must acknowledge that a technologically mediated relationship is missing important elements of true human connection. We are not present, together, in physical space. It is always “virtual” in some way.

Here's the trick, though: as technological tools increase in complexity and sophistication, it can feel as though our communion is becoming more real. This is where I want to begin a real conversation: the point where we begin to consider these mediated relationships and communities to be "just as good as," "good enough," or even "the bright future" for us, especially if we're talking about communities of faith and expressions of Church.

sky wires

How might we describe phases of experiencing “reality” apart from reality?

Let’s look at some steps of technological progress in our communication methods (with gaps, to be sure):

  • For millennia, we have depended on written communication. This gives us a very low sense of real presence of the person with whom we are communicating, but we communicate.
  • The inventions of auditory communication (radio, telephone, etc.) gave us an increased sense of presence–we can recognize the voice–but only a single channel of real communication.
  • Audio/visual communication (video calls, FaceTime, Skype, etc.) increases the sense of presence even more. More channels give us things like 2D visuals, auditory signals, and some limited perception of body language.
  • Simple VR – through avatars in virtual worlds (AltspaceVR, games, VR meeting technology) – provides a crude simulation of reality, but even less sense of real human presence. We intuitively feel the disconnect (unreality) when we interact, so our minds must decide to play along when the friend we're talking with is represented by a CGI human, an alien, an animal, etc. In actuality, we're back to relying mostly on auditory cues to give us a sense of the real person we know. We get very little else.

As VR technology progresses, we will certainly see things become more “real” in appearance. We can expect more and more lifelike avatars and worlds; the gaming world proves this. It should give more of a sense of personal communion, but our brains always know it’s not real life. Our brains are hard to fool. We perceive the artificial space, motion, physics, environment, etc.

  • What about the addition of other sensory technologies, which will certainly come? We'll have tactile feedback through simple things like gloves, or even whole-body suits that attempt to mimic physical touch, though problems like the mass of objects and normal physical motion through space will likely remain unsolved. Again, we'll experience some increase in "believability," but likely never approaching the level needed to really "fool" our brains. Will it feel the same as really being with a person? Not really, but it may be a fun and practical substitute in some circumstances.
  • The next, and ultimate, logical step is direct stimulation of our brains' receptors through technology. We can imagine nanotechnology implants that are essentially imperceptible to us in a physical sense – no goggles, suits, etc. These may be able to stimulate so many micro facets of our brains that we would be unable to tell the difference between a real experience and something virtual. It's extremely difficult to fool our brains and bodies, but it seems plausible to create a remarkably realistic "dream" that might even remain clear in our minds.

Where could that take us? Is the next logical extension lives lived in dark spaces without any actual physical movement or interaction in the real world? We could be "living" completely in our minds and perceptions. Some may argue in favor of the ultimate extension of this – the abandonment of our physical bodies entirely, prone as they are to fatigue and disease, and limited in time and space. If our brains are still required for consciousness (?), we could imagine truly disembodied tissue being sustained through some means as "all we would need" to be alive and experiencing life. Transhumanist ideas already posit this ideal. Many works of art speculate about these futures: The Matrix, Blade Runner, etc. But these tend to be dystopian visions of a future in which artificial life forms, relationships, and virtual lives are not held up as ideal.

When I’m asking questions like these, I have to go back to the deepest levels of my worldview. Do we know what ours is? Can we trace an evolution, or a logical progression of a worldview that is reflected in the full embrace of virtuality?  These aren’t new ideas, for sure. Buddhism posits our world as an illusion. The early Church fought against the influence of Gnosticism which, among other things, held that the material physical world was evil and the spiritual world was good.  I’d suggest that, if our theology resembles Gnosticism, Christian Science, Buddhism, and Transhumanism, we will have few objections to raise with the direction of any technological developments, even in the evangelical Church.

However, if our theology and worldview follows the path of historic orthodox (small and big O) Christianity, then we must push back and question these developments and their proper place in our lives. Christianity has always held that God created the physical world as good and that we, as created beings, are being transformed into the original image (ikon) rather than being released as pure spirits/minds in the End. Jesus rose with a body; he wasn't a ghost. Our resurrected bodies will be different, for sure, but the Church has always taught that our bodies will be resurrected, not just our minds/spirits.

God created human beings to be in communion with Him, and with others, in a physical created world. A Biblical worldview affirms and assumes the goodness of the created world and the integrity of human beings. We are made in the image of God: body, soul, spirit. None of these “pieces” is optional or disposable. Each is important and has a role and healing in God’s redemptive salvation plan.

If this is true, it leaves us with some questions:

Are technologically mediated relationships and communities an end goal for which we should be pushing?

Is a virtual experience of worship, community, and relationships just as good, good enough, or the fullness of what God desires for us?

Or is there another place for the wise use of these kinds of technologies, one that helps to overcome some practical limitations, but one that doesn’t become an idealized end for which we strive?


Top Photo by Hammer & Tusk on Unsplash

Second photo by me.

Mobile, In Reverse

My daughter, like many young creatives, invents her own workflows for digital photography – shoot on DSLR, edit on mobile.


I teach mobile production – that is, creating content on mobile, for mobile. But I’m also a traditional photographer and filmmaker – that is, making things with film!

This week I was struck by how creative people solve problems in very personal ways. I noticed this with my own daughter, who is growing to be quite a talented photographer.

But she does stuff backwards! (At least, from my own training.)

I've been teaching her how to use a DSLR to expand her photography beyond her mobile device. She has learned how to take really interesting photos with her iPhone 7, but she has been a little intimidated by my old Canon DSLR with a big lens.

That is, until she really got hold of it. Now she’s a maniac! She is shooting a lot.

So what's the next thing I want to teach her? Well, I'm thinking about putting a copy of Photoshop and Lightroom on another computer at home so she can learn those programs and learn to edit her images "the right way." But she doesn't wait for me to figure out my licensing on Adobe CC and all that…

She invents her own workflow: DSLR to mobile.

This is how it goes:

  • shoot using the DSLR, capturing on an SD card
  • The SD card goes into the Mac Mini
  • The photos import into Photos App

So far, so good (I've sketched that import step in code after the list, just for fun). But, instead of needing Photoshop (or Affinity Photo, or…), she just:

  • emails them to herself at the original size, saves them to her Camera Roll on the iPhone, and
  • edits them in the free Snapseed app.
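
For the curious, the front half of that workflow (getting the originals off the SD card) is the kind of thing that could be scripted. Here's a tiny, hypothetical Python sketch; the card and destination paths are made up, and her real import goes through the Photos app:

```python
import shutil
from datetime import date
from pathlib import Path

# Hypothetical paths -- adjust for your own SD card mount point and photo folder.
CARD = Path("/Volumes/EOS_DIGITAL/DCIM")
DEST = Path.home() / "Pictures" / "DSLR-imports" / date.today().isoformat()

if not CARD.exists():
    raise SystemExit(f"SD card not found at {CARD}")

DEST.mkdir(parents=True, exist_ok=True)

copied = 0
for photo in CARD.rglob("*"):
    # Grab common still-image formats; skip anything already copied over.
    if photo.suffix.lower() in {".jpg", ".jpeg", ".cr2", ".png"}:
        target = DEST / photo.name
        if not target.exists():
            shutil.copy2(photo, target)
            copied += 1

print(f"Copied {copied} photos to {DEST}")
```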

She is already comfortable with that app, so why mess with something hulking like Photoshop?! And, Snapseed has many more fun features for processing photos than boring old Photoshop.

Butterfly in Motion
A butterfly comes in for a landing on summer flowers – photo by AK

She makes really interesting, really good photos.

The things we old(er) folks can learn from our intuitive younger counterparts!

Do you have a backwards story like this?

A Few Thoughts on “The Limit” – New VR film from Robert Rodriguez

I'm looking for a narrative VR film I can really love and want to watch over and over again. Could it be "The Limit"?

"The Limit" is Robert Rodriguez's latest entry into the world of narrative filmmaking using VR technology. Released on November 20, 2018, it's an action film that takes the audience on a brief journey to find out why some bad guys are chasing "us." It's not much more than that. It's fun, but in the end not very ambitious as storytelling. Rodriguez, joined by his son Racer on this outing, isn't known for thoughtful character dramas, but for action and for trying new things. This film reflects those values. It's a non-stop action sequence, with only a few moments of relief. For action fans, it may be the next thing. Or?

Let’s Talk Story
The story begins with "us" (the viewer) sitting in a bar and meeting M-13 (Michelle Rodriguez), who is obviously a badass waitress. The filmmakers take little time to set up anything, but we quickly learn that we can't speak and that we have some kind of AR "enhancements" that enable us to see bad guys. Other than that, we have no idea why we're here. If you read the synopsis for the film, you'll discover that we are "…a rogue agent with a mysterious past…" — whatever.

We are quickly forced to flee said bad guys with M-13. After that, we mostly get shot, help drive her Jeep, and suffer repeated blackouts after bad things happen to us. But, for some reason, we don't seem to die from anything we suffer, including a gunshot to the stomach and a freefall from an airplane. Guess that all needs some explanation, which M-13 gladly gives in a long static monologue that tries to fill in a few details in an attempt to convince us that this all matters for some reason. Oh, we're kind of alike. And now she's got a plan and a goal for herself. Finally! But it mostly involves us walking into a poorly defended warehouse, killing the stupid henchmen, and confronting the Big Bad Guy (played by Norman Reedus of The Walking Dead fame). He wants something we've got, of course. A kind of slow, pitiful chase ensues. We have another blackout. But then there's a twist ending. You get the idea. Oh, and the story will continue. Theoretically. If anyone will shell out for more. I'm not convinced they've given us anything to hope for. Michelle is badass, and they treat her pretty well in terms of not really playing on her sexuality. Points for that.

Ultimately, as with many films in this genre, the story suffers for the sake of the experience. Why does this have to be? I’d call it more of a ‘ride-along’ experience. We (the audience) are immersed in the action but are almost completely passive characters, only taking initiative for a moment toward the end. So, we get to watch M-13, the real Protagonist, go through her journey that our appearance at the beginning seems to spark.

The problem with this, and many stories, is that we have no reason to care. We don’t know who we are, or who she is, other than a little backstory and a twist at the end. The stakes aren’t even that high for most of the film because it seems that we can just get shot, survive a car crash, a freefall from a plane, etc., and it’s no big deal. Guess “we’re” pretty badass, too. But “we” don’t say or think much of anything the whole time.

On the medium and techniques used in “The Limit.”
Just a note for would-be viewers: it's not full 360 VR, more like 180 in a custom "Surreal Theater" that fills in the other 180 with a dark cineplex, in case you want to see how cool it is to be alone in a dark cineplex.

It's not an open world by any stretch. The directors choose and maintain our focus using the camera, as in traditional cinema, and the timeline is completely linear. The main difference here is the increased sense of immersion and some ability to look around a limited frame. I think it works pretty well, and it seems to me to be the best option for storytelling. If you let the audience just wander around, it's hard to create narrative flow and pacing. That works for exploring worlds in a game setting, but I don't think we humans will lose our enjoyment of and desire for stories told to us. I'm not alone in thinking about ways to guide a viewer, using other cues (visual and auditory) to direct attention without locking an audience's POV to one frame. But it's a big challenge, to be sure. The directors do choose to pull us out of the POV a few times so we can watch ourselves drive away, etc. They also cut to insert shots that are from our POV, but these are done as traditional cuts rather than "moving" us closer. It works fine.

Our character cannot speak for some reason. We communicate, very little, via some kind of text screen, but I can't figure out where it is supposed to be located if our body and face are supposed to look normal. At the beginning the logic doesn't work. Maybe it begins to make sense later (spoiler!) when M-13 reveals that she's also a bionically enhanced person. Can she see our communications in a kind of heads-up AR display?

The main problem with our inability to communicate is that the film is mostly a monologue by M-13. It gets really tedious when she has a long exposition scene in which she puts the pieces together for herself and for us. Was this just laziness on the part of the writers, or an inherent limitation of the medium?

Final observation: my feeling is that the running time of an immersive VR action film must be kept short. This film really contains only about 15 minutes of actual narrative, and Rodriguez made a good decision to keep it brief. Because the viewer is immersed and can't control their point of view much, the intense action and motion will certainly cause some queasiness for many viewers. It did for me. I could never watch a feature-length film without breaks if it is shot in this style. Maybe with some downtime scenes? I'm sure they took that into consideration, but it's something for all of us to consider if we're planning a VR film. On that note, an immersive story without all of the intense action is likely just fine for a longer run time. Then the challenge is to have a real story. And does anyone want to watch a 'talky' character drama in VR? Perhaps?

Other notes: I watched the 3D version on an Oculus Go headset. The film is delivered as an app from the Oculus Store and includes a lot of behind-the-scenes material that I think will be fun to watch. The app download is pretty big, over 3GB, but that's not a problem for me. You can watch this sitting in a chair, as the film is not a full 360 experience.

Why Use a Director’s Viewfinder? – A Tutorial in VR

Here's a quick explanation of how I use director's viewfinders – either physical finders (like the Alan Gordon Mark Vb) or smartphone apps (like the Artemis Director's Viewfinder).
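
At its heart, a director's viewfinder answers one question: what will a given focal length see on a given sensor from where I'm standing? For the math-minded, here's a minimal sketch of the standard rectilinear field-of-view formula; the sensor widths below are example values I've assumed, so swap in your own camera's specs:

```python
import math

def horizontal_fov_degrees(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angle of view of a rectilinear lens: 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Example sensor widths in millimetres (assumptions -- check your own camera's spec sheet).
sensors = {
    "Full frame (36.0mm)": 36.0,
    "APS-C Canon (22.3mm)": 22.3,
    "Super 35 (24.9mm)": 24.9,
}

for name, width in sensors.items():
    for focal in (24, 35, 50, 85):
        fov = horizontal_fov_degrees(focal, width)
        print(f"{name}: {focal}mm lens -> {fov:.1f} degrees horizontal")
```

Apps like Artemis are, in essence, doing this kind of calculation for you and showing the resulting frame for whatever camera and lens combination you dial in.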


Note: I’m playing with VR/360video a bit more and getting my filmmaker’s brain around ways to use it for different kinds of stories. This isn’t really a story, but I got an urge to do a very quick tutorial in VR.

This hastily shot draft gives me ideas for the future. What do you think: does VR add to, or detract from, the experience? I'm already making my list of things I'd do differently, or add, for the next one.

We’re all still learning here.

[Update: YouTube VR is now available on headsets like the Oculus Go. You can watch it there, in Oculus, using this link: https://youtu.be/kayay5yl3nM ]

[Production Notes: I shot this with a very basic Ricoh Theta SC camera. I recorded the audio double-system, using a small smartphone lav mic plugged into a spare iPhone 4S sitting behind me on the chair. I synced the sound in Adobe Premiere CC and edited the clips there, exporting and uploading straight to Vimeo for hosting.]

Professional Video Apps for Android Devices – Do they exist?!

(Updated 17 February 2018)

I'm happy to say that, after a long time of lagging behind the iOS world, Android devices and apps can finally support professional production on this platform.

And, because most people in the world are using Android devices, it makes sense to offer recommendations that help them, especially non-professional media creators, choose the best tools for their video content.

For serious video content production, you may want to consider a video-specific app. While almost every general camera app allows you to choose still image capture or video capture (maybe even animated GIF and other formats) there are a few dedicated apps that only shoot video. I highly recommend checking these out, even if they add an app to your collection (who can resist just one more app?!)

Why would you choose a video-only app?

One reason is that a good dedicated video app will have controls designed for video shooting, and you won't get settings mixed up when switching between modes in a general camera app. For instance, this can happen when switching between high-definition still images at a 4:3 aspect ratio and HD video, which is typically 16:9 on your screen. Some apps choke a bit when switching. And some apps are set to automatically start video recording when you switch to video mode. I can't figure out why that's a default feature (sometimes non-defeatable), but it's a pain when you're trying to shoot dedicated video. Ultimately, dedicated apps are designed for video shooting and make the most of their interface and features.

As with still image capture apps, the image quality from these is essentially the same if you shoot with the same settings (resolution, data rate, picture profile, etc.). The device hardware is really the defining and limiting factor. [BIG NOTE for low-end phone users: some of these apps are not recommended for you. Filmic Pro won't even install on my Samsung Galaxy J5 (Marshmallow OS). My best advice, if you have a low-end device, is to go with a standard camera app, like Open Camera, that lets you shoot stills and video. No need for a specialized app.]
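
Related to that note on settings: when I compare apps, I first make sure the test clips really were recorded with matching settings. Here's a rough sketch of how you could check a clip with ffprobe (part of the free FFmpeg tools); this assumes ffprobe is installed and on your PATH, and the filenames are just placeholders:

```python
import json
import subprocess

def video_settings(path: str) -> dict:
    """Ask ffprobe for the first video stream's resolution, frame rate, and bit rate."""
    cmd = [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=width,height,r_frame_rate,avg_frame_rate,bit_rate",
        "-of", "json",
        path,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)["streams"][0]

# Placeholder filenames -- point these at your own test clips.
for clip in ("filmic_pro_test.mp4", "cinema_fv5_test.mp4"):
    s = video_settings(clip)
    print(clip, f"{s['width']}x{s['height']}",
          "avg frame rate:", s.get("avg_frame_rate"),
          "bit rate:", s.get("bit_rate"))
```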

Here are three dedicated video capture apps that are the top of charts for me. I’ll review them in my general order of preference, but they’re all very good. Any weaknesses are generally pretty minor.


Filmic Pro ($15 USD, plus in-app purchases for some specialized features)

Among mobile filmmakers and mobile journalists, Filmic Pro has long been the go-to app. It started life as an iOS-only app but the Android version is fully developed and gives you a full feature set for truly professional video production on a mobile device.


Filmic Pro gives you:

  • Full range of manual controls with a very friendly on-screen interface for manual focus and exposure control. Manual controls in all apps will be device-dependent. They should work on most higher-end phones. On a lower-end phone you still have manual control, but some features, like the on-screen sliders, won't be there.
  • Shooting aspect ratios for every popular format, including square format video, and UI rotation for simple vertical video shooting.
  • High resolution (including 4K) and data rates, depending on your device capabilities. Excellent video quality, including smooth motion.
  • Accurate frame rate. Other apps’ frame rates can vary widely, even if set in the app.
  • Format presets if you want to switch quickly between settings. For instance, a 4K setting and a FullHD (1080) video setting without going through many menus.
  • Automatic exposure and focus “pulling” for more professional control.
  • A continuous auto-focus mode that uses most of the screen as a focus area. This is great for gimbal and other handheld shots where you have a lot of motion in the scene.
  • App upgrades available for even more professional features such as picture profiles and LUTs. (Not really needed by mere mortal visual storytellers.)
  • Picture profile settings – these are advanced settings that professional filmmakers use to give them more control in editing their picture. Typically not something non-professionals want to use, as the video captured may not actually look very good until it is “graded” in an editing program.
  • Live audio monitoring, with on-screen meter, is possible with the proper adapters.

Weaknesses:

  • Lots of options that may be confusing to non-professionals.
  • Display can be cluttered if you have lots of features activated, but they are instantly accessible.

Cinema FV-5 Pro (paid version $4.50 USD)

Cinema FV-5 is a companion to Camera FV-5, which I’ve reviewed elsewhere. But it can also be purchased and used separately as a dedicated video capture app. As with Camera FV-5, the Cinema version is full-featured and well-suited to professional mobile production. If you want a dedicated app, I feel it’s the best choice for lower-end devices, even if features are disabled.

Things I especially like include:

  • Simple controls when first opened
  • Nice implementation of manual controls including a slider for focus and exposure on devices that have the Camera2 API.
  • Paid version (Pro) has full resolution and high data rate capabilities, up to your device’s specs.
  • The most common manual control I like to tweak, exposure compensation, is on screen all the time. Nice.
  • On-screen display of current settings.
  • Continuous focus mode is very accurate and useful.
  • Audio monitoring and on-screen meters
  • Professional features like interface customization, a histogram for video levels, etc.
  • Quick switching between features such as a histogram, stabilization on/off, etc.

Weaknesses:

  • No vertical video mode (interface rotation) – see my comments on other apps for my thoughts on why this is useful.
  • Locking and unlocking some functions, like manual exposure, could be simpler – a few too many clicks to release and re-set.

On a low-end device like my Galaxy J5 I found:

  • Some functions don't seem to work consistently, like manual exposure by touch/hold on the screen. There's no change no matter where you put the "box."
  • Of course, the manual controls change to meet the device specs. The capabilities are still there, just some controls get more basic.
  • Other device-dependent features, like 4K video recording, may be missing on low-end devices.

Cinema 4K (paid version $4.50 USD)

Cinema 4K is designed with many professional features for high-end devices. As the name implies, it can shoot up to 4K if your device can handle it. It is set up for more experienced shooters, but I find the controls are very intuitive and responsive.

I’m reviewing this app with a caveat. On my Google Pixel 2 (still pretty state-of-the-art as I write this) I get great general image quality. However, I see a lot of motion artifacts anytime I pan, tilt, or have a lot of motion in the shot. I’ve tried to track it down, and have decided it’s the app. My A/B testing with Filmic Pro on the same device shows ‘jerky’ motion in footage from Cinema 4K versus Filmic Pro footage, with the exact same settings. I’m willing to consider that it could be unique to the Pixel 2, so until I can do more tests with other phones, I’ll just leave it at that.

Cinema 4K gives you:

  • A very nice on-screen interface with common controls and settings clearly accessible.
  • Extensive, but easy-to-use manual controls, device dependent.
  • Very friendly settings menu that covers almost everything on one screen rather than many menus
  • Full range of resolutions and data rates for video files, device dependent.
  • Full screen, clean display when recording, but manual controls appear when you tap the screen. Nice implementation.
  • Picture profile settings – these are advanced settings that professional filmmakers use to give them more control in editing their picture. Typically not something non-professionals want to use, as the video captured may not actually look very good until it is “graded” in an editing program.

Weaknesses:

  • My biggest concern is actually image quality. Images without a lot of motion are great, but I see a lot of motion artifacts in Cinema 4K footage when the camera is panning across a scene. I've done a lot of A/B testing and can't fix it. I believe it is related to the variable frame/bit rates used by most apps on mobile devices (see the sketch after this list for one way to check a clip).
  • No vertical video adaptation. This has been a “rule” in the professional filmmaking world, but it’s passé, to me. Let us shoot 4K vertical video!
  • No live audio monitoring, or on screen audio meters.
  • Cinema 4K will install on a lower-end phone, but many features are disabled. I don't recommend it for low-end devices (see my note above).


Handling Antagonists in Social Media: Our Public Voice

It may be self-evident to you, but I have to remind myself that my response to antagonistic comments could be a powerful influence to everyone in my audience, not merely an answer to a hater.

In many places where we seek to love and serve people, there are groups and individuals who oppose us and our message, no matter how much love we pour into our content. One friend of mine says it's not unusual for him to get 90% negative comments on posts intended to speak peace to people in his region.


Imagine you’re in a public place like a shopping mall or university center. A person spots you from across the room as you are discussing something important with a new friend. They make a beeline toward you, yelling out angry comments and insults before they even reach you. It’s a tense moment. At least one person seems to be spoiling for a fight, right there. How would you respond?

Many of us are naturally inclined to avoid any kind of confrontation, especially a direct one like this. We would try our best to just back away, apologizing, fearful, and praying for others to intervene and cool down the situation. Others of us are bold risk-takers and would step up ready to embrace a challenge (hopefully with fists unclenched).

In social media, I have encountered such situations. A number of years ago I was developing a feature film project in Latin America, and I was using early Facebook and other social media to raise awareness of the project. A certain gentleman, who lived on the other side of the world, decided that we were evil people, exploiting the indigenous people, etc. (We were making the film at the request of, and in partnership with, an indigenous group in the Amazon.) He didn't know me, but he attacked me and threatened to rally people in the country to shut us down.

Now, this was Latin America, so things never go according to plan in the best of times, and I had no real concerns that he could have any clout. My partner thought I should just block and ignore him. I thought I’d at least try to engage with him to see if I could convince him that we were OK.


It became an interesting conversation for me, though I don’t think my arguments were very convincing for him at first. He did cool down and kind of disappear after a while. But, I did get to know something about him, his own past and personal issues that seemed to drive his anger. So, I felt it was fruitful. He never went further with his threats and actions, and it all basically blew over.

Ironically, a year or so later, he contacted me again. He was raising money for a project to help the indigenous group for which he was an advocate (in East Asia) and actually asked for my advice and help with his own project. It was a crazy turnabout, but I believe it was because I treated him with respect and tried to understand where he was coming from when he was attacking us. I pray for his project.

Of course, it could have gone much worse. Sometimes, and you may not be able to discern this in advance, it is truly fruitless or even dangerous to engage much. However, in this case, I felt it was worth it.

Now, in my story, all of this deeper engagement was through email, so it wasn’t public. However, if it had taken place in “public” on a comment thread on some social media site, I would have to discern the value of the engagement.

My theory is that, in a situation like this, where we try to have a conversation with an antagonistic person, our comments may be more for the others in our audience who are "eavesdropping" on the conversation than for the person with whom we are conversing. We can't know in advance whether our gentle speech might soften or turn away their anger.

I often get wise advice from people who say it's not worth the engagement. But, as in my original shopping mall example, I should also consider that there are many more people with whom I'm communicating. Anyone within "earshot" can also hear my arguments and my tone, and that could be beneficial to them as they assess just who I am.

Am I a good or bad person?

Does what I am saying in answer to common objections sound reasonable?

Do I sound like they could have a safe conversation with me?

This indirect communication could form an important part of someone else’s journey.

What do you think? Have you had this kind of experience on social sites? How should we handle people who oppose us in public? Are there principles or “rules” we should follow?

– Tom


Image credits:

Icons made by Freepik (https://www.freepik.com) from www.flaticon.com

Icons made by Epiccoders (https://www.flaticon.com/authors/epiccoders) from www.flaticon.com

Icons made by Baianat (https://www.flaticon.com/authors/baianat) from www.flaticon.com