To those in Humane Tech, Machine Learning, and Economics — here are 3 new subfields to start!
If you want to work in one, I'll try to get you colleagues, funding, etc.
My big goal for 2023 is to gather researchers and makers, and boost their careers. Specifically, I want to help the three fields below come together. These fields will need researchers, institutional homes, and funding. I’ll do my best.
Field #1, a More Unified “Humane Tech”
When “the Center for Humane Technology” started, we hoped to gather inventors, makers, and researchers—especially those with a more optimistic and social-science-grounded vision for technology—and to help them build humane tech.
Unfortunately, we didn’t know what this more optimistic and social-science-grounded tech should look like. I left to try to figure that out, while Tristan took the wheel at CHT and focused it on other things: public advocacy, educating lawmakers, pushing for big-tech regulation, etc.
But I loved our original vision of an ecosystem of Humane Tech makers. I still feel it’s a critical requirement. So, that was sad.
Even without a convening entity, the field advanced. Most notably, Zebras Unite built a community that’s a bit like what I wanted to make, although focused much more on business models than on technological vision.
Others¹ filled in some of the vision: There are social technologists like Aviv Ovadya, RadicalXChange, the Computational Democracy Project, Amy Zhang, the Collective Intelligence Project, MakeSpace/Sprout. There are tools for thought, end-user coding, and introspection pioneers like Andy Matuschak, Othman Benkiran, and Elena Glassman; ML assistant aligners, like the AI Objectives Institute; OS imagineers like Bret Victor, Nick Punt, and Welf von Hören.
But… there’s still no convening, field-building entity. At least, not yet. Now that I’ve finally completed my own sketch of what humane tech is, maybe I can make up for lost time and link this field up.
How to connect:
Field #2, Meaning-Aligned ML (plus, Big Data Virtue Ethics)
Next up, two related fields. Let’s call them “Meaning-Aligned ML” and “Big Data Virtue Ethics”. For now, it makes sense to treat them as one.
About virtue ethics: utilitarianism appeals because it seems precise and mathematical. But, as Max Novenstern wrote on Twitter (and others have observed), the way to be a good agent is to have virtues, not to calculate outcomes.
If that's true, what virtues should one have? I don't think there's a fixed answer. Rather, people develop the virtues needed for the environments and relationships they find themselves in. So, what we want is not a set of virtues. It’s an algorithm: a way for a person or agent to recognize, conceptualize, and start living by a virtue that’s missing or needed in their environment.
In humans, moral emotions guide this process.
For Meaning-Aligned ML, the north star is teaching a machine to do this. And the goal of Big Data Virtue Ethics is to map the virtue/environment pairs which arise in human life. If we could do that, we’d have a survey of which contextual virtues make our society work.
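To make the “algorithm” above a bit more concrete, here’s a toy sketch. Everything in it is hypothetical and hand-labeled — a caricature of the real research problem, which would need models of situations and virtues rather than strings — but it shows the shape of the “recognize a missing virtue” step: given the situations an agent faces in an environment, surface the most-needed virtue the agent doesn’t yet live by.

```python
from collections import Counter

# Toy sketch: recognizing a virtue that's missing or needed in an
# environment. All situation and virtue labels are made up.

def missing_virtue(situations, current_virtues):
    """Given (situation, virtue-called-for) pairs from an environment,
    and the set of virtues the agent already lives by, return the most
    frequently needed virtue the agent lacks, or None if none is missing."""
    needed = Counter(
        virtue for _situation, virtue in situations
        if virtue not in current_virtues
    )
    if not needed:
        return None
    return needed.most_common(1)[0][0]

# A week in a (made-up) online moderation environment:
log = [
    ("heated thread", "patience"),
    ("newcomer asks basic question", "patience"),
    ("friend posts misinformation", "honesty"),
    ("heated thread", "patience"),
]

print(missing_virtue(log, current_virtues={"honesty"}))  # patience
```

Big Data Virtue Ethics would then be, roughly, running something like this at scale: aggregating which virtue/environment pairs recur across many lives, rather than for a single agent.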
I believe this would also let wisdom catch up with science. For a long time, people thought facts and theories could have strong epistemic foundations, but that ideas about what’s good or wise couldn’t. When these fields emerge, that will change. That’s huge: one of the biggest social changes since the Enlightenment.
How to connect:
Read about meaning-alignment on our notion.
DM me to support in other ways.
Field #3, an Economics of Values-Based Choice
In her ode to Derek Parfit, Ruth Chang wrote:
I remember being quite excited…, and thinking how, if I was right, then economic theory as a whole rested on mistaken foundations. That we would have a new grounding for affirmative action. That egalitarianism would need to be rejiggered. That we would have to reimagine reflective equilibrium as not geared toward a single point but towards multiple points.
Ruth here reflects on the hoped-for consequences of work published in 1998. Needless to say, the economic revolution hasn’t (yet) happened. But there are reasons to be optimistic. (I've been corresponding with Ruth, and we agree about this.)
It’s hard though. It means building an empire of math and social science, atop the slim foundations Ruth and I have laid — her with her parity-based choice logic; me, with methods to collect data about values. First up are alternatives to rational choice theory / expected value (I hear one is in the works). From there, we probably need to replace game theory wholesale, and there’ll be other serious changes across microeconomics (watch out price theory, information econ, and organizational econ).
But what I’m keenest on is a renaissance in welfare econ and in social choice, plus new ways to align ML (see above), and a new branch of political theory. But that’s a long way away.
How to connect: Just email or DM me with relevant work. I’ll make an email list, and maybe a regular call.
How far we get with each field depends on who shows up.
Mostly, we’ll need researchers. If you know someone who belongs in one of these fields, please connect us.
I’d love for these to have institutional homes, and some amount of funding! If you know an institution that's well-suited, DM me. Institutions can help by convening conferences or events, by funding research, by hosting an online community, by lending academic authority, etc.
Finally, if you want to know more about my own work in these fields, check out my new talk. Chapter 2 is about Meaning-Aligned ML, Big Data Virtue Ethics, and Values-Based Choice. Chapters 3 and 4 are my attempt to reframe Humane Tech.
Thanks to Andy M, Aviv O, Welf vB, and Ellie H for input on this post.
(Footnotes!)
Forgive me if your work is missing here; I’m just starting to sketch it out. But also: a field needs limits. Humane Tech cannot include all of STS, all of HCI, all of design or design theory, etc. I am still collecting opinions on what the limits should be, but I lean towards the following:
HT is about research and exploration, not critique and commentary. HT focuses more on unlocking upsides (collective intelligence, lives well lived, creativity, etc) than on curtailing downsides (bias, abuse, bad actors, etc), although of course these are related. HT does not assume it already knows the character of a good tech infrastructure (e.g., that it is decentralized, open source, end-user programmed, local-first, cryptographically secured, etc). Humane technologists may test one of these as a possibility, but they aren’t betting the farm.
Read more at What is Humane Tech?.