I am here to pour cold water on this fire.
My background: PhD in Applied Economics, and I work in banking/finance in quantitative analytics, though now it's been renamed "data science." In my role I have seen hundreds of models that banks use to make decisions. I am also an amateur game designer (I can provide a link to my free game if you want to double its downloads to 2). I built an AI system for strategy games.
AI models are garbage in, garbage out. They require massive amounts of data - those deep fakes still need real humans to provide the data. Real humans - get this - don't need other humans to imitate for basic things. Our brains are better at making accurate inferential guesses in situations with bad data.
Save this post - because I am going to make a prediction. Driverless cars will not be a common thing until we build lots of road infrastructure for "smart roads" that can help them out. Those cars you see driving themselves are tested on known roads and don't do well with sudden new things. Sudden road construction? The car won't know what to do. Draw a line around it? It will think it's on a circular road.
AI doesn't have self-awareness - it can't tell if what it is saying is BS or not - a human has to intervene and tell it. Make a deep fake AI that does a poor job of imitating something? A human has to go, "yeah, that wasn't believable."
The real gains from AI, which really is just a lot of statistics applied in real time via computer code, come from high-volume tasks where a human would do better, but using one is not cost effective. Spam emails have been greatly reduced by statistical algorithms. Those are the easy wins.
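To make "just statistics applied via computer code" concrete, here is a minimal sketch of the classic approach to spam filtering: a naive Bayes classifier over word counts. The tiny training set and whitespace tokenization are purely illustrative assumptions, not how any production filter actually works.

```python
from collections import Counter
import math

def train(messages):
    """messages: list of (text, is_spam). Returns per-class word counts and message totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, spam in messages:
        counts[spam].update(text.lower().split())
        totals[spam] += 1
    return counts, totals

def is_spam(text, counts, totals, alpha=1.0):
    """Compare smoothed log-posteriors for spam vs. not-spam."""
    vocab = set(counts[True]) | set(counts[False])
    scores = {}
    for label in (True, False):
        # log prior: fraction of training messages in this class
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            # Laplace-smoothed log likelihood of each word given the class
            score += math.log((counts[label][word] + alpha) / (n + alpha * len(vocab)))
        scores[label] = score
    return scores[True] > scores[False]

training = [
    ("win free money now", True),
    ("free prize claim now", True),
    ("meeting moved to friday", False),
    ("lunch on friday with the team", False),
]
counts, totals = train(training)
print(is_spam("claim your free money", counts, totals))   # True
print(is_spam("team meeting on friday", counts, totals))  # False
```

Nothing "intelligent" is happening here - it is counting word frequencies and multiplying probabilities, which is exactly the point: high volume, cheap, and good enough.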
For example, almost all fraud detection and money laundering detection is done by so-called "machine learning" models. I hate that phrase because all it really is are statistical regressions applied in real time. But anyways, those models weed out the transactions that are most likely to be fraud or money laundering. Those flagged transactions are then reviewed by a human. Because as good as the models are, in general they are still incorrect 99 times for every one time they are correct! A human has to look through the data and go, "yeah, that isn't money laundering."
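The "99 wrong for every 1 right" ratio is just base-rate arithmetic: when actual fraud is very rare, even a decent model drowns in false alarms. The numbers below are illustrative assumptions, not figures from any real bank model, but they show how the ratio falls out.

```python
# Back-of-the-envelope precision calculation for a rare-event detector.
base_rate = 0.001       # assume 0.1% of transactions are actually fraud
sensitivity = 0.95      # assume the model flags 95% of true fraud
false_pos_rate = 0.10   # assume it also flags 10% of legitimate transactions

true_alerts = sensitivity * base_rate            # fraud correctly flagged
false_alerts = false_pos_rate * (1 - base_rate)  # legit transactions flagged
precision = true_alerts / (true_alerts + false_alerts)

print(f"precision: {precision:.4f}")                                   # ~0.0094
print(f"false alerts per true alert: {false_alerts / true_alerts:.0f}") # ~105
```

So a model that catches 95% of fraud still hands the human reviewer roughly a hundred clean transactions for every real hit - which is why the human in the loop isn't going away.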
Another quirk of human nature is that we are less forgiving of machine error than human error. For example, despite autopilots being proven safer than actual pilots (the US military has the data from drone flights vs. human flights), no airline would go 100% autopilot because humans don't trust it.
I think even if we had self-driving cars that were safer than human-driven cars, all it would take is a few high-profile accidents per year for people not to trust them.