Carlos, you've called the December 2023 workshop “groundbreaking.” As you know, there have been countless conferences of this sort on AI. What makes this one stand out?
Well, something distinctive about our workshop is that, from the beginning, we decided not to target only academics working on AI. I wanted to have people with different views. It has been important to me in my life as a researcher to draw inspiration from people from all walks of life. Thomas Åstebro and I wanted to create a connection between researchers and people who were simply curious about this revolutionary change, be they academics or not. The idea was also to have industry experts in the room with us. Here at HEC, my colleagues have the connections with these industrial kingpins.
I found that the industry experts were generally thinking ahead of academics on long-term questions such as organizational learning, the organization of the firm, governance and risk management. These are questions that were brought to the table by industry experts. They wanted to know about research on those topics, and some of them said they wished we had focused more on that at the conference. Take organizational learning: how gradual digitalization and the use of AI will increase transparency within the firm. Or take the algorithms and how AI and humans cooperate. Leaders like Andrea Pignataro (Founder and CEO, ION, Ed.) and Dr. Lobna Karoui (President, AI Exponential Thinker, Ed.) thought this is very relevant and are convinced it is going to have implications for how firms organize themselves. Indeed, the interaction with AI is not only relevant from a theoretical point of view, but also for companies. Then there were discussions on governance and risk management. When you use artificial intelligence or automated processes, risk gets transformed, it doesn't go away. If everything is done by machines, then you need to have risk management for machines. At the moment, we have a lot of risk management for humans, a lot less for machines. But as we transition into an environment where machines will do a lot more of what we do, we have to rethink how we manage that risk. So, the governance of risk and its framework is an interesting area of study that was brought up by the participants.
There has been a lot of debate about job loss due to AI’s Large Language Models (LLMs) like ChatGPT. One of your speakers, Pamela Mishkin (from OpenAI, the San Francisco-based company), presented research on the impact of LLMs on the American labour market. She concludes that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. How do you respond to such research, Carlos?
By being cautious, because what Pamela shows us is the upper limit of how ChatGPT could affect the American workforce. That doesn't mean the humans currently employed for those tasks will be replaced. You have to factor in the cost of adaptation, how effective the AI might be, and how easy it is to implement. It will take a while to really figure it out. And then, the technology is likely to offer new opportunities, new jobs, new tasks. Nevertheless, it’s a very nice study, a first step towards understanding the labor implications. But it's only the beginning.
Pamela concludes that LLMs could have considerable economic, social and policy implications in the United States, for example. What universal implications are there that can be extrapolated to Europe, Asia or beyond?
So, putting aside these consequences and whether they might be positive or negative, there is a big difference between now and when the Internet arrived. In my early 20s, access to the Internet required a computer. It was expensive to get a good connection, and you also needed the hardware, you know, modems, etcetera. So it wasn't accessible for many people even in Western nations, much less in developing countries. This time, it’s different. For better or for worse, we have fast Internet connections, through cell phones or on home computers. And in terms of language, you don't need to know computer science or anything like that. You can interact with LLMs as long as you’re literate. And you don't even need to know English. You can actually interact in your own language, French, Spanish, even vernacular languages. So, the benefits will not necessarily stay in the United States. This is the first time that there’s a level playing field. Everyone has access to this, you know, whether you are at HEC in the office or in rural settings worldwide, where access to technology has traditionally been more challenging.
This was one of many topics hotly debated at the workshop, which explored a diversity of issues. They ranged from a study on Generative AI and human crowdsourcing to exploring how cost-effective it would be to automate human tasks with AI. Your own research has been on patents, their market, their value and use. But in what research have you explored the juncture between AI and entrepreneurship? And what links can you establish with patents or your other research interests, like the strategy and financing of entrepreneurial activities?
OK, so people have been using machine learning in the study of the economics of innovation, especially in relation to patents and patent landscapes, for a while. It’s a topic of interest for corporations and consultants. Patent landscapes predict technological trends, so researchers have been studying this for a while, using these tools to refine existing measures of similarity between the patent portfolios of companies. For instance, these techniques can be useful to assess the synergistic value between acquirers and potential targets in markets for technology. However, what I find really interesting is when the new technology opens up the possibility to investigate an area that, in the absence of such technology, would have been impossible or extremely hard to study. What excites me is that this new technology, especially large language models like ChatGPT, allows us to quickly process massive amounts of text. And this can be a game changer. For instance, the interaction between inventors and patent examiners determines whether an invention in a patent application is truly novel. We use a lot of text, there's a lot of data. So, access to artificial intelligence and other new tools will allow us to actually dig down into that documentation. It has the potential to help us researchers understand which innovations truly innovate. It also helps us to better understand the process. And, finally, there is the potential for patent offices to improve, to make the processing of patent applications faster.
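To make the idea of measuring similarity between patent texts concrete, here is a minimal illustrative sketch, not taken from the interview or from the speaker's own work: it assumes Python with scikit-learn and uses invented placeholder abstracts rather than real patent data. It compares short texts with TF-IDF vectors and cosine similarity, a much simpler stand-in for the LLM-based approaches discussed above.

# Illustrative sketch only: comparing hypothetical patent abstracts with
# TF-IDF vectors and cosine similarity (scikit-learn). The abstracts below
# are invented placeholders, not real patent data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "A battery management system that balances cell voltage during charging.",
    "A method for balancing lithium-ion cell voltages in an electric vehicle pack.",
    "An image sensor with improved low-light noise reduction.",
]

# Turn each abstract into a TF-IDF vector, then compute pairwise cosine similarity.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
similarity = cosine_similarity(vectors)

for i in range(len(abstracts)):
    for j in range(i + 1, len(abstracts)):
        print(f"abstract {i} vs {j}: similarity = {similarity[i, j]:.2f}")

In this toy example, the two battery-related abstracts score noticeably higher than either does against the image-sensor abstract; aggregating such scores across a firm's whole portfolio is one simple way researchers build the portfolio-level similarity measures mentioned above.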