Max Tegmark: Will AI be the best or worst thing ever for humanity?
“We have less than 50% probability of society surviving the next 50 years.”
Max Tegmark, a leading scientist and authority on AI, is adamant that the future we want is the future we need to design. This isn’t science fiction. Max founded the Future of Life Institute, which counts Elon Musk and Larry Page as members, to drive a global agenda toward a future where AI is designed to improve life.
The risks of AI are plentiful – we could develop AI systems that ‘outsmart us’, or selfish leaders could start an AI-powered nuclear war – but the rewards are far greater: redistribution of wealth so we can work in the jobs we want, greatly improved healthcare and education, the possible elimination of poverty, and more time to do what we enjoy.
But this reality requires thinking and acting on the possibilities of AI and designing a way to control and work with this technology. It requires a global shared vision.
But getting there doesn’t seem as simple as it sounds.
Listen below on your favourite site or watch on YouTube.
- AI will become the best thing or the worst thing to ever happen to humanity. The upside is unfathomably large. Everything we love about civilisation is a product of human intelligence, so if we can amplify our intelligence with artificial intelligence, it raises the possibility that we could solve problems that stump us today – cancer, poverty – and help us spread from Earth into the cosmos.
- AI is neither evil nor morally good. A lot of people make the mistake of treating tech as a new religion, with a mantra that more technology is automatically good technology. Technology is a tool; it comes down to what we do with it.
- The power of AI means we cannot afford to make mistakes. As a society, we have traditionally learnt from mistakes – with knives, fire, and nuclear weapons. But AI is a more powerful technology, and ‘one mistake could be one too many.’
- COVID can be seen as an opportunity for society to learn to plan for the future. Most countries reacted slowly to the pandemic – on masks, lockdowns, restrictions on movement – and the consequences of that delay were significant. The point is: don’t be ignorant of the consequences of inaction (with AI), be ready for all the possibilities and design the future we want.
- AI has already proven significantly valuable in improving healthcare. Its ability to process patient imaging data, and to scale in the cloud to reach those who cannot afford treatment, is one of the better ‘positive’ use cases of AI helping humanity.
- Big tech vs government – tech companies such as Facebook and Google are leading the progression of AI. What is needed is more government investment in open universities and research. “It’s up to the governments to level the playing field.” Governments can encourage open research that benefits the entire community versus self-serving proprietary IP. But can governments afford the talent against tech’s deeper pockets, and is funding AI a popular policy decision?
- AI and technology are not a zero-sum game – progress doesn’t have to come at the expense of others. Technology can make everyone better off at the same time. If we do this right as a species, we increase goods and services for everyone, and they can be distributed equally. But this will not happen by default.
- We shouldn’t fear job losses – we aren’t in a competition with the machines. As long as we collect enough taxes, we can ensure everyone is better off. The problem is that the decisions on distributing that wealth ‘could be’ made by tech nerds who are not qualified to make them.
- What sort of society do we want? Technology can enable us to work in the fields we want to work in and let the machines take over the jobs that ‘suck’. The key is for governments to distribute the wealth and put more money into fulfilling jobs such as teaching and nursing. One of the only solutions is to increase taxes on companies and individuals and close the loopholes through which tax is strategically minimised. Unfortunately, both now and historically, this has proven unpopular with voters.
- The Future of Life Institute was founded on the premise that we should aspire to make AI improve society. We should be enjoying our wealth and getting dramatically healthier and happier; instead, we spend so much time arguing over the 1% and ripping each other off that we are jeopardising the very survival of our species.
- Who should control the power of AI? Big tech and governments appear to be in an ‘AI arms race’ with varying moral and ethical standards. The pessimistic AI school of thought is that we could destroy humanity through unintelligent or reckless leaders leveraging AI for military or personal gain.
- AI in itself isn’t evil; the bigger fear is that it becomes ‘competent’ while its goals are misaligned with ours. “We have less than 50% probability that society will survive the next 50 years.” The opportunity is to design AI for our benefit whilst being mindful of its potential.
- We need a shared global AI vision. AI knows no boundaries, and if we are going to benefit from it, we need a global vision. Such a simple statement, yet loaded with pessimistic realities as to whether we as humans can actually achieve it.