NEWS FROM THE FUTURE: Emojis — the language only AI can read
From grunts, to words, to emojis, to hieroglyphs
21 January 2025 - 05:00
by Futureworld
Dateline: December 23 2038
Dr Jens Limhurst took off his Emersio goggles and looked at the sequence of nine simple and commonly used emojis on the screen in front of him.
He had followed a hunch ever since he got his hands on a pair of artificial intelligence (AI)-powered Emersio goggles from one of Dubai’s hottest start-ups. Today it looked like the hunch was correct. Limhurst glanced over at the second screen where a two-page story was displayed, telling a tale about how a nice chap went mad over a girl. He donned the Emersios, shuffled the emojis around, and saw a new two-page story develop.
This one spoke about how food and drinks could lead you into foolish actions. Next, he changed viewpoint and told the AI to use his research assistant’s context instead of his own. The two sequences of emojis now told two completely different stories.
The hypothesis Limhurst had worked on was that a picture says more than a thousand words, and the story it tells depends on the image’s position relative to other pictures. Analysing this without Emersio’s AI had been impossible, but now, with the enormous amount of data processing power available, he was about to crack it.
However, the last experiment showed something far more complex: the context of the reader determined what story the images told. No wonder teenagers seemed to be able to communicate with a jumble of emojis and slang that no adult could decipher.
Limhurst took off the Emersios again and looked up at a picture of the Temple of Edfu hanging on the wall. It was covered in hieroglyphs. “Could it be,” he thought, “that the hieroglyphs aren’t a simple pictographic language, but actually an advanced and data-dense language that only AI can read?”
As he quickly fed the AI model a database of every known hieroglyph, his pulse quickened. “I wonder what it will tell me?”
• First published on Mindbullets January 16 2025.
Trust me, I’m an AI
Smart software is so smart, we trust it more than humans
Dateline: January 14 2025
“Trust me, I’m a doctor!” is an old cliché that we’ve all learnt to take with a pinch of salt, but now there’s a new benchmark for trust.
How often, when a friend or colleague spouts a factoid or news bite, do you check it out on Google for accuracy? And when driving to a new destination or trying to beat traffic, you rely on your smart device — car or phone — for navigation, don’t you? You’d never ask a stranger for advice.
All these things are driven by AI and we’ve come to rely on them because they’re usually right. Machines learn by consuming vast amounts of data and they get constant feedback from other machines, “adversarial networks” that evaluate their performance on the job. No humans can handle that level of throughput.
As a result, we’ve now got AI systems that tell us who to hire and who to fire, when to buy and sell, what to plant where and when, and even who to date. Our contracts and tax returns are checked by AI, our medical scans and test results are screened by AI, and in some societies our behaviour is automatically evaluated, to see if it’s socially acceptable. By AI.
Sure, there are biases and error bars in any system, but these are mainly baked in by the architects and designers, who after all are only human. And there’s bias and prejudice in every society, so you can’t really blame the computers for picking up on it. Ethics and norms are a social problem, not a data construct.
And most of the time, almost all of the time, the machines are more accurate than people, better than humans, at finding errors and diagnosing conditions, identifying criminals and the like.
You can take it from me, it’s true. Trust me, I’m an AI.
• First published on Mindbullets June 6 2019.
Despite appearances to the contrary, Futureworld cannot and does not predict the future. The Mindbullets scenarios are fictitious and designed purely to explore possible futures, and challenge and stimulate strategic thinking.
Published by Arena Holdings and distributed with the Financial Mail on the last Thursday of every month except December and January.