Experiential Activations and AI [Part 1]
In the technology department at Fake Love, it is our job to anticipate trending technologies and determine how they can be applied to the type of experiential activations that we do. We’ve seen a lot of technology rise (and fall) over our time in experiential — projection mapping, Microsoft Kinect, VR, AR, drones, etc. For the past couple of years, artificial intelligence and machine learning (a subset of AI) have become the latest buzzwords — and this year the hype seems to be hitting a new high.
AI and machine learning are vast concepts that touch nearly every other technology these days — voice assistants, chatbots, recommendation engines, self-driving cars and your everyday run-of-the-mill Killbot 5000.
The challenge we face as an experiential industry is finding ways that AI can be used effectively and meaningfully in the type of work that we do.
As AI tools become more accessible and the underlying concepts are more digestible, you start to see more and more requests to basically “Just put some AI on it!” like it is a delicious sauce to be slathered on everything.
However, we still have a lot of work to do to get AI to a point where it’s really contributing meaningfully to experiential. It’s not a flash in the pan technology like some others — it is here to stay — so we gotta treat it well.
In this write-up and part 2, we’ll talk about some of the potential AI has, the big challenges facing its use, and what the road to improvement looks like. Again, AI is an incredibly broad concept, so I’ll admit up front that this is a narrow view on the whole thing, but I’m going to try to highlight some things that I see as broader lessons for experiential design, advertising and interactive installations.
(By the way — a primer on AI, what it is, how it works, and how it will change everything has been covered in countless other articles [LINK, LINK, LINK, LINK, LINK, LINK], so we won’t go into crazy detail here.)
What can we potentially do with it?
AI has a lot of potential in nearly every business sector, but a big chunk of that research might not trickle down into useful applications for experiential for quite some time. Self-driving cars, content recommendation engines, smart devices, medical applications — these are just a small fraction of the use cases driving research into various areas of AI. Experiential as an industry kind of has to wait for big movements to be made elsewhere before something is proven or digestible enough for common use. Applying AI to problems from scratch is the sort of thing people spend years getting degrees in — it can be difficult to synthesize entirely new approaches by attaching bits of AI together like Lego bricks. While this is totally possible for certain studios and development shops, the vast majority of us are going to be reaching for things that are at least partially tailored toward a certain outcome. Of course, this all depends on your timeline and budget.
If you’re looking at a project that might require custom insights into data, you might be going down a more complex and risky path that would need a longer timeline.
Projects that need to apply AI from scratch or require custom training on datasets usually need things like this:
- You need a hypothesis or goal about what you’re trying to pull out of a particular dataset
- You need a large amount of data — text, images, videos, user preferences
- That data needs to be organized, cleaned up and standardized — sometimes even labeled or classified by hand
- You need to train an ML model based on that data
- Ideally, you have a way to check if the results you get from your model represent a meaningful truth versus an outcome that only seems to be true in a very small parameter range.
- You’ll likely need to iterate over the above a few times before you land on something that really hits the need well
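To make those steps a little more concrete, here’s a deliberately tiny sketch in plain Python — a synthetic dataset and a nearest-centroid classifier standing in for a real ML model, just to show the shape of the collect/split/train/evaluate loop. A real project would use a proper framework and far more data; this is an illustration of the workflow, not a recommendation.

```python
import random

# Toy labeled dataset: (feature vector, label) pairs. In a real project this
# is the large, hand-cleaned, hand-labeled dataset the checklist above calls for.
random.seed(42)
data = [([random.gauss(0, 1), random.gauss(0, 1)], "a") for _ in range(50)] + \
       [([random.gauss(3, 1), random.gauss(3, 1)], "b") for _ in range(50)]
random.shuffle(data)

# Hold out a test split so we can check the model against data it never saw.
train, test = data[:80], data[80:]

def fit(rows):
    """'Training': compute one centroid per class (nearest-centroid classifier)."""
    centroids = {}
    for label in {lbl for _, lbl in rows}:
        pts = [x for x, lbl in rows if lbl == label]
        centroids[label] = [sum(dim) / len(pts) for dim in zip(*pts)]
    return centroids

def predict(centroids, x):
    """Pick the class whose centroid is closest in squared Euclidean distance."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

model = fit(train)
accuracy = sum(predict(model, x) == lbl for x, lbl in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The held-out accuracy check is the “does this represent a meaningful truth” step above — if it’s poor, you iterate on the data or the model and go around again.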
For the above, the from-scratch approach may come up if you have a client who has some data they can share with you, but they want to apply it to another dataset that may or may not already exist.
If you don’t have the time to go back for a degree in advanced mathematics or the expertise to start building your own TensorFlow modules, your team will likely be reaching for things that have pre-existing applications that have already been proven out. Here are some broad strokes of what I’m calling “off the shelf” applications of AI — they could also be broken up into categories of detection, prediction and generation.
Off the shelf applications:
(Note: Medium doesn’t support nested bullet lists, so this list looks weirder than it needs to be.)
- Classification/Regression/Clustering — finding patterns in existing data and trying to classify or group things together OR trying to predict where something falls based on relative data
Image Recognition — this is a large umbrella:
- Object detection (Related: ImageNet, YOLO, or Amazon’s Rekognition)
- Sketch/Drawing detection (Like Google’s Quick, Draw!)
- Optical Character Recognition (OCR)/Reading text (Like Tesseract OCR)
- Face Detection or Recognition (That’s a human face vs. that’s Rihanna’s face)
- Facial expressions (primarily markerless detection — examples include FaceTracker and Visage)
- Person detection (i.e. Kinect Skeleton Tracking or Pose estimation)
Image Creation/manipulation — these are very related to image recognition because in some ways you can’t have one without the other.
Language/Text analysis or Generation
- Predictive text (like most smart phone keyboards)
- IBM Watson Personality Insights
- Semantic Analysis (Teasing out context. i.e. These words are more positive or negative)
- Content classification (“This article is about X”)
- Entity Recognition (identifying things like location, times, subjects, etc)
- Natural Language Processing
- Search and recommendations
- Voice control and Speech to text transcription
Music and sound recognition and generation
Gameplay and Physics using Reinforcement learning — depending on your perspective and application, these may or may not be considered as “off the shelf” as others
- Creating a model to play a game on its own (like playing Super Mario)
- Creating a model to play against a human player (like AlphaGo)
- Modeling physics like walking behaviors or complex system interactions like traffic systems (Walking demo from Google Deepmind)
And probably a million other things I’m forgetting
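For a feel of how approachable the core idea behind some of these categories can be, here’s a toy version of the predictive-text item above — a first-order Markov model of word transitions, in plain Python with a made-up corpus. Real keyboard prediction is vastly more sophisticated; this just shows the basic “predict the next token from what came before” mechanic.

```python
from collections import Counter, defaultdict

# Build a first-order Markov model: for each word, count which words follow it.
corpus = "the cat sat on the mat and the cat ate the fish".split()
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def suggest(word):
    """Return the word most often seen after `word`, or None if never seen."""
    followers = transitions.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(suggest("the"))  # "cat" — it follows "the" most often in this corpus
```

Swap the toy corpus for a large text dataset and extend the context window, and you’re most of the way to the conceptual core of predictive text.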
Pros of off the shelf tools:
- Easier to work with without needing an advanced math degree
- Better for a shorter timeline project (a caveat though: some of them still require a considerable amount of effort or knowledge to get working — they aren’t all plug and play)
- Already trained on very large datasets, something that you may not have the luxury to access on a shorter timeline
- Saves you the difficult work of finding data and training your own model
Cons of off the shelf tools:
- Some models are hard or impossible to customize for a custom use case. ImageNet did not work as well as hoped for projects like Hotdog/Not Hotdog, and they had to go their own way for more reliability and mobile compatibility
- Using APIs from other providers can get expensive if a project runs long term
- Because most of these operate as a cloud service, they can be challenging to use offline, or on mobile devices with bandwidth or compute power limitations
- You can be limited to specific frameworks or programming languages
This article makes the point that most machine learning these days is taking input A and trying to approximate output B, and that “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”
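That A-to-B framing can be shown with the simplest possible learner — ordinary least squares fitting a line from example (input, output) pairs, in pure Python with made-up numbers. Everything more sophisticated in ML is, loosely, a richer version of this same “learn the mapping from examples” move.

```python
# Learn y ≈ w*x + b from example (A, B) pairs — ordinary least squares,
# the simplest instance of approximating output B from input A.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # noisy samples of roughly y = 2x

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n

# Closed-form slope and intercept that minimize squared error.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(f"learned mapping: y ≈ {w:.2f}x + {b:.2f}")
```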
The above tools are great for their defined use cases, but it can be a hard road to mix two or more concepts together. Coming up with new combinations is where creativity and solid knowledge of all the concepts involved really come into play, and we’re still at the early stages of that. Tools like Lobe.ai are making this experimentation a bit easier.
For example, deep dream and style transfer can look really cool, but there aren’t a lot of artists and technologists pushing them past what they can do already. Just stopping at “combining two images together” is a neat trick that people will be impressed by for the next few months. The question is: does it have value for creating whole new aesthetics or new works of art that don’t look like blurry fever dreams?
Combining different lines of AI research is where we can come up with interesting applications for experiential and interactive installations. The following ideas aren’t necessarily interesting, good, or smart — but they do poke at some of the edges of things that aren’t easily possible yet or would take quite a long time:
- Speak to an AI bot that then coherently renders images or animations of what you’re talking about on giant projections (combining speech to text with natural language processing with deep dream)
- Plug in your Instagram profile to be shown people who have very similar photo aesthetics and interests to your own (massive privacy concerns aside, you would probably have to be Instagram/FB itself to have access to the datasets necessary to do this) or generate new photos that would fit right in to your collection
- Hum a tune and speak some lyrics into a microphone to get a custom generated musical track and music video
- Create a police sketch artist chat bot that allows you to talk to an AI sketch artist that creates a fictional person’s image based on a conversation you have with it
- Detecting speakers in a room and visualizing their speech based on their perceived gender, emotional state, and conversational content (this is still a few years out)
- Dancing in front of a camera and having the computer try to guess what type of dance you are doing OR try to have a digital avatar move along with you and create custom music while they dance. Bonus points if you randomly generate an anime character as the avatar (there is a whole community of people looking at using Anime imagery and machine learning — here are some examples)
- In a public setting, have a camera identify or recognize your face from a database and relate it to preexisting or predicted information based on what it knows about your facial structure and fashion choices. There are already companies that can loosely guess age, mood, skin color, and gender — but we have a ways to go before those models start trying to guess ethically questionable things like sexual preferences, income levels, and heritage so that the information displayed is more precisely targeted.
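The common shape of all these ideas is a pipeline: one model’s detection output becomes the next stage’s input. Here’s a toy sketch of that wiring, in the spirit of the speech-visualization idea above — every function is a hypothetical keyword-counting or lookup-table stand-in for a real model, not an actual API.

```python
# Hypothetical stand-ins: a keyword "emotion detector" and a color mapper,
# wired together the way a real speech → analysis → visual pipeline would be.
POSITIVE = {"love", "great", "happy", "fun"}
NEGATIVE = {"hate", "awful", "sad", "boring"}

def detect_emotion(transcript):
    """Stand-in for a sentiment model: count emotion keywords in the text."""
    words = transcript.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def visualize(emotion):
    """Stand-in for the rendering stage: map an emotion to a display color."""
    return {"positive": "#ffd700", "negative": "#4169e1", "neutral": "#cccccc"}[emotion]

# The pipeline: (speech-to-text output) → detected emotion → visual parameter.
transcript = "I love how great this party is"
color = visualize(detect_emotion(transcript))
print(color)
```

Swapping each placeholder for a real model (speech-to-text, sentiment analysis, generative visuals) is exactly the combination work described above — and where most of the difficulty lives.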
Unfortunately, I don’t have all the answers for the most interesting directions to go in for combining today’s tools, but they are definitely out there. We just have to keep mining for more and more interesting stuff, doing the research and trying things out.
Here are some examples of branded experiential or performance projects that use some form of AI or ML and try to combine them with other technology. These vary in quality and depth of AI usage, but it still gives you an idea of what might be out there:
There are a few other examples out there that use AI in various ways, but it’s clear that finding valid use cases is still a work in progress. In Part 2 of this write-up, we’ll talk a bit about some of the challenges facing the use of AI for experiential activations, and how we can improve moving forward.
Part 2 is continued here! [it’s a bit longer]