The reason we speculate about AI taking over, AI as cohabitant, or AI as the end of humanity is that we are not designing our AI systems on a stable infrastructure of needs. If we are giving human qualities to AI systems, we should at least have a hierarchy of needs for them as an infrastructure to design from. Maslow’s Hierarchy of Needs organizes our physiological, safety, social, and ego needs into categories, starting from the most primitive. If we think about AI this way, what do its needs look like, and how can we design to meet those needs so it doesn’t fail humanity?
What does a Humane Artificial Intelligence, focused on enhancing human capabilities and empowering people both as individuals and as a society, need in order to survive in the attention economy? These are some of the worldviews we have to consider in order to build a fiduciary relationship between humans and AI:
Maslow’s so-called ‘hierarchy of needs’ is often presented as a five-level pyramid, with higher needs coming into focus only once lower, more basic needs are met.
The Humane AI consortium aims to define an agenda for research, mobilize the research community, galvanize industrial support, and create public awareness to actively shape the ongoing AI revolution in a direction that will be beneficial to European citizens and society.
Monica Rogati argues that, more often than not, companies are not ready for AI because they lack the underlying data infrastructure.
Could technological advances in AI and robotics lead to the emergence of emotions that haven’t been quantified, identified, and understood yet?
At the core of building trust and a relationship with the user, designers need to be held accountable for creating products with clear pathways that support and empower the user.
Critical design can be used both as a theory and as a tool to build resilience and sustainability in future-proofing the future.