I’ve said a lot publicly about the idea of AI self-awareness and how the public perception of what makes AI scary is flawed. In most of these monologues, I’ve pointed out that what scares me isn’t the idea of killer robots or sentient routers choking the life out of me with prehensile Cat5 cables. Instead, what scares me most is the idea of a slowly creeping bias making its way into the algorithms we depend on, and of that bias persisting in legacy systems up to 50 years into the future.
Imagine what our present would be like if machine learning algorithms had been invented in 1968 and everyone building on them had developed natural language processing (NLP) algorithms using a 1960s lexicon.
Now imagine a present where those systems aged into legacy infrastructure over time but were still built on top of algorithms written and published at a time when segregation advocate George Wallace was an electable candidate. Unsurprisingly, what you’d get out of an NLP algorithm like this would probably be underwhelming to our more modern sensibilities. Then again, we did just elect the George Wallace of the 21st century, so what do I know?
If bias is the problem, then self-awareness is the solution.
As humans, we can account for bias by applying our incredible capacity to be self-aware, or otherwise critical of the information handed to us, both as hard values and in reckoning the space around us as it presents itself to our senses. When the general public thinks about machine self-awareness, it’s a vague notion buttressed by the idea that machines will realize how bad humans are and scheme to wipe all humans from existence. It has the draw of being poetic, the child killing the parent, but it isn’t entirely realistic. Self-awareness could mean many other things, including a machine’s awareness of bias within the information it is given.
If we accept this definition of self-awareness, then we can imagine three different types of applied self-awareness:
- Constructive – Identifies bias within data and self-corrects for it, outputting unbiased data.
- Destructive – Identifies bias within data and perpetuates new biases into its outputs.
- Apathetic – Identifies bias within data but does nothing to correct for it, perpetuating biased outputs.
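To make the distinction concrete, the three types above can be sketched in code. This is a toy model, not a real debiasing method: it assumes "bias" is simply the mean offset of a list of numeric scores, and the names `SelfAwareness` and `process` are mine, invented for illustration.

```python
from enum import Enum, auto


class SelfAwareness(Enum):
    """The three types of applied self-awareness described above."""
    CONSTRUCTIVE = auto()
    DESTRUCTIVE = auto()
    APATHETIC = auto()


def process(scores, mode):
    """Toy model: 'bias' is the mean offset of the scores from zero.

    Every mode *detects* the bias; they differ only in what they do
    about it before producing output.
    """
    bias = sum(scores) / len(scores)  # detected bias (visible to all modes)
    if mode is SelfAwareness.CONSTRUCTIVE:
        # Self-correct: recenter the data so the detected bias is removed.
        return [s - bias for s in scores]
    if mode is SelfAwareness.DESTRUCTIVE:
        # Perpetuate new bias: push the data further off-center.
        return [s + bias for s in scores]
    # Apathetic: aware of the bias, but passes it through untouched.
    return list(scores)
```

The point of the sketch is that detection and correction are separate steps: the apathetic and destructive cases are just as "aware" as the constructive one.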
There will be more on this issue in the future, but for now this is something to think on and build upon.