Generative AI tools are making headlines. Society is beginning to see the vast potential of data and the risk that accompanies it. This is healthy! We should dive headfirst down this path while working to measure progress both objectively and subjectively.
Excuse me for a few minutes while I dump a (mostly) train-of-thought polemic here about data utility and measuring societal progress.
data is used today for important things
Some would suggest otherwise, but for the most important organizations on earth, data is not a competitive advantage. It is the competitive advantage (sorry @bennstancil). Collecting, operating on, and making inferences from data yields an understanding of people more far-reaching and comprehensive than our brains are accustomed to. I’m going to coin a term, because this is the internet and you can do things like that – I call it The Hark Paradox: people positively assert that internet companies are listening to their everyday conversations and producing advertising insights from them, when ordinary data collection and inference explain the same eerie accuracy. To me it is clear evidence of the mismatch between expectation and reality. Evidence to the contrary – battery life, microphone sensitivity, recency bias – doesn’t convince people that they’re not being spied on. We can’t explain it, so we attribute it to magic. Speaking of which…
“We recognize that work involving generative models has the potential for significant, broad societal impacts.” – OpenAI (creators of DALL•E)
The above could end up being the understatement of the century. The magic of generative AI systems like DALL•E and GPT-3 comes from the massive amount of data fed to train them. Academia and industry alike are still working to build comparably performing models from smaller, meticulously curated datasets, but the outputs just can’t compete with the behemoths trained on much larger (and incidentally much less careful) datasets. Quantity beats quality in the world of learning machines, so those with the most data win. That’s a competitive advantage, straight up.
People talk about the potential of generative AI to displace creators. This is something we’ll need to reckon with as tools like DALL•E 2 become more widely used. Legislation will not keep up¹, so will we (and maybe more importantly, how will we), as a society, vote with our time/money/talents? Will Network States save us? Or will we have to fumble for a solution like we have with nuclear weapons, worldwide pandemics, and privacy?
subjective measures of progress
In short, our progress means the most when people recognize it and feel good about it.
A new phase of awareness has begun – people get that a privacy violation like Cambridge Analytica’s has massive consequences. Building protections against those problems is only half of the battle. The progress we make is most effective when people recognize for themselves that progress is progress and failure is failure, and that message carries best when it’s loud.
Measuring sentiment is both easier and harder than ever before. Self-reported sentiment is subject to personal biases. Sentiment inference only gets us so far (for now). There have got to be better ways to understand how people feel and meet their needs. If we treated sentiment about progress as a product development problem, we’d ask ourselves how to drive customer affinity in the midst of nuance and tradeoffs.
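If we did treat it that way, one toy way to combine the two flawed signals we already have – biased self-reports and noisy inference – is to weight each by how noisy we believe it is. This is a sketch under my own assumptions, not a real product methodology; the function name, score scale, and variance numbers are all invented for illustration.

```python
def blend_sentiment(self_reported: float, inferred: float,
                    self_var: float, inferred_var: float) -> float:
    """Inverse-variance weighting: trust the less-noisy signal more.

    Inputs are hypothetical: sentiment scores in [0, 1] and an
    estimated error variance for each measurement channel.
    """
    w_self = 1.0 / self_var
    w_inferred = 1.0 / inferred_var
    return (w_self * self_reported + w_inferred * inferred) / (w_self + w_inferred)

# A glowing self-report (0.9) that we know is noisy, against a gloomier
# inferred score (0.5) we trust more: the blend lands near the inferred value.
print(blend_sentiment(0.9, 0.5, self_var=0.04, inferred_var=0.01))  # 0.58
```

The hard part the paragraph above gestures at is, of course, estimating those variances honestly – which is exactly where personal bias and inference limits bite.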
So people are afraid of AI and data collection. They anxiously tap that “do not track” button because it makes them feel better – Apple knows it. I have my own gripes with App Tracking Transparency, but you’ve got to give it to them. They know how to appeal to the primitive brain. I have hope that we can take that much further and drive technological literacy while building things that are good and feel good.
objective measures of progress
Long gone are the days when polymathy reigned supreme. In today’s world, failure to specialize broadly – deep in a field and conversant beyond it – is failure altogether.
Historically siloed fields are now deep and wide; academics yesterday were extreme specialists, and academics today are, by necessity, budding polymaths (or drowning in anxiety). Interdisciplinary work is simultaneously more valuable and more challenging than before². Bridging the gap between human knowledge and persistent data will extend our abilities the same way previous technological advancements like the printing press did.
Ben Thompson’s piece arguing that creation and substantiation are the bottlenecks in humanity’s idea value chain is worth a read. He says the next phase of technological progress is to unblock creation for everyone (not just the so-called “creatives”), and that generative AI is a step in that direction. Put another way, we keep human knowledge accelerating by offloading an increasing number of tasks to technology. One way to measure technological progress is in how much a person can accomplish in their life – how much life one can live, or aggregate life value. Incorporating life expectancy, but also wellness, sentiment, opportunity, and accomplishment, could give a strong objective approximation of life value. But the problem is, who is measuring this stuff?
Congress is only maybe thinking beyond the next term, and only maybe some of the time. They’re probably not thinking often about the life value of their constituents, even though the republican (small r) model gives legislators some latitude to act in the people’s best interest even when it disagrees with spoken preference – latitude that can yield meaningful long-term outcomes. What kind of incentives would motivate a postmodern legislature to quantify and account for aggregate life value? The right data will help us find the answer.
And let’s face it, businesses aren’t usually focused on long-term holistic outcomes for their customers; they’re focused on driving the bottom line. The rising generation, though, expects accountability from businesses. This goes way beyond cancel culture and cocoa sourcing. How do we, as business leaders, treat customers holistically – not just as eyes on a screen or dollars to be spent? Data is a huge part of the answer.
With the wide-reaching complex problems of today, we have to take knock-on effects in the lives of the people around us into consideration. We need to build a future that is unifying, satisfying, and really cool.
¹ It would take a different kind of miracle for Congress to pass comprehensive privacy law in any reasonable amount of time. There’s a lot of power in deliberation, but human governance as it exists today lags behind technology.
² There’s a bit of chronocentrism at play here, but I think this is real. The leaps being made today aren’t more difficult than, for example, the invention of the transistor. They’re just broader cross-disciplinary amalgamations.