Position statement from Centro-i for the Society of the Future


In recent days, the Future of Life Institute published an open letter calling for a six-month moratorium on the development of large AI models, specifically any model that exceeds OpenAI's GPT-4 in size or sophistication. The authors ask that, in the absence of a voluntary, transparent, and enforceable moratorium, authorities and governments step in and impose one.

Centro-i has joined as a signatory to this letter, since we see it as an urgent call to technology companies, the community, and governments to ensure that upcoming developments are built responsibly, under principles of ethics and inclusion.

The call comes from those who know AI systems and their implications best. The authors and signatories include researchers from some of the world's leading universities, developers of the most advanced AI projects, such as DeepMind, and executives and co-founders of technology companies. The call does not come from people who want to halt the development of AI or who underestimate its potential for positive transformation in society. On the contrary, this group recognizes the extraordinary potential of these developments.

Those of us familiar with the tech industry know that social and regulatory frameworks have always struggled to keep up with technological advances. AI represents a break in paradigm in the speed of these advances. The same was said decades ago about the development of microchips: so-called "Moore's law" described how computing power began to grow exponentially, doubling every one or two years. But to begin to grasp the acceleration we are witnessing, two things must be taken into account:

1) In AI, Moore's law applies to everything we can imagine. Even though we have observed exponential growth in computing power, that does not necessarily imply exponential growth in its impact. A lawyer with a computer twice as fast is not going to be twice as productive. But with AI comes what Sam Altman, CEO of OpenAI, has called a "Moore's Law for everything." As AI doubles in power, the activities it can perform will also roughly double in performance and impact.

2) Advances in the capabilities of AI systems are taking place at an unprecedented pace. Altman estimates that the capacity of AI models is growing by a factor of 10 every year. To picture the difference between the old Moore's Law with base 2 and this new pace with base 10, imagine you start with 1 grain of rice. If it doubles every year, after ten years you would have 1,024 grains. If instead it multiplied by 10 each year, you would have 10,000,000,000 grains. Ten billion. And that is assuming the growth rate remains constant.
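The rice-grain arithmetic above can be verified with a short script; it simply iterates the two growth rates for ten years:

```python
# Compare ten years of Moore's-Law-style doubling (base 2)
# with the tenfold-per-year growth Altman estimates for AI models.
grains_base2 = 1
grains_base10 = 1

for year in range(10):
    grains_base2 *= 2    # doubles each year
    grains_base10 *= 10  # multiplies by 10 each year

print(grains_base2)   # 1024
print(grains_base10)  # 10000000000 (ten billion)
```

After ten years, the base-10 scenario yields roughly ten million times more grains than the base-2 one, which is the gap the paragraph is pointing at.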

Even if an agreement is reached to apply a moratorium, we at Centro-i do not anticipate that much momentum will be lost in AI development. Even if some companies committed to a pause, many other projects would likely continue on their course. In the best case, we could hope for a brief moment of respite, which will never be enough to fully foresee the future course of AI and its impacts. In any case, we join the call for companies, communities, and governments to commit urgently to reflection, the conscious design of best practices, ethical guidelines, and whatever regulations prove necessary. It is essential that ethical principles be applied to AI so as to obtain the greatest social benefits while reducing its risks. There are already guidelines to follow, such as the UNESCO Recommendation on the Ethics of Artificial Intelligence, signed by 193 countries.

And it is not only a matter of applying criteria and processes, but of creating institutions and governance mechanisms for continuous monitoring and adaptation in the face of a phenomenon that will change our lives in ways we can hardly imagine today.
