
Zhidongxi (public account: zhidxcom)

Compiled by Chen Junda | Edited by Panken

Zhidongxi News, June 25. Mira Murati, chief technology officer of OpenAI, recently attended a graduation event at Dartmouth's Thayer School of Engineering, where she sat for a 50-minute in-depth interview with Jeffrey Blackburn, a former Amazon executive and current Dartmouth College trustee.

▲Murati at Dartmouth College (Source: Dartmouth College)

In the interview, Murati recounted her career path through the aerospace industry, the automotive industry, and VR/AR before joining OpenAI, and drew on her front-row view of the industry to analyze issues such as AI governance, AI's impact on education, and AI's impact on work.

She revealed that a PhD-level intelligent system could arrive next year or the year after, possibly referring to GPT-5. She also put forward a highly controversial view: some creative jobs perhaps should never have existed, and AI will soon replace them. The remark caused an uproar online, with many arguing that OpenAI was stirring the pot and did not understand what creativity means.

Murati attributes OpenAI's achievements to the combination of three factors: deep neural networks, large amounts of data, and massive computing power. Although researchers are still studying the principles behind it, practice has proved that deep learning really works.

She said that AI safety and AI capability are two sides of the same coin: only a smart model can understand the guardrails we set for it. From an engineering perspective, improving a model's capability does not make it less safe. OpenAI bears great responsibility for model safety, but effective risk management also requires the participation of society and government. OpenAI is actively working with governments and regulators to address AI safety issues together.

Audience members also asked Murati pointed questions. On model values, Murati said OpenAI has embedded human values into its AI systems through reinforcement learning from human feedback (RLHF); going forward, the focus will be on building a customization layer on top of the basic value system, giving customers highly customized model values.

The audience also asked Murati about OpenAI's recent copyright-infringement controversies and about licensing and compensation for content creators. Murati reiterated that OpenAI did not deliberately imitate Scarlett Johansson's voice, and that her own process for selecting the voice was completely independent.

As for copyrighted content, OpenAI is exploring an aggregated data pool: creators contribute copyrighted content to the pool, the contribution of that content to model performance is evaluated in aggregate, and creators receive corresponding compensation. The technology is quite difficult, however, and will take time to put into practice.

Unlike OpenAI CEO Sam Altman, Murati previously kept a lower public profile. She was born in Albania in 1988 and studied in Canada and the United States.

She joined OpenAI in 2018 and is one of its early members. As OpenAI's CTO, she leads the company's work on ChatGPT, DALL·E, Codex, and Sora, while also overseeing its research, product, and safety teams.

Microsoft CEO Satya Nadella has said of Murati that she combines technical expertise with business acumen and has a deep understanding of OpenAI's mission.

The following is a complete compilation of Murati's in-depth interview at Dartmouth College (to improve readability, Zhidongxi has adjusted the order of some questions and answers and made certain additions, deletions, and modifications without altering the original meaning):

1. Having worked in aerospace, automotive, VR/AR, and other industries, I found AI the most interesting

Jeffrey Blackburn: Everyone wants to hear about your current situation and what you are building. It's really fascinating. But maybe we should start with your story. After you graduated, you went to Tesla for a while, and then OpenAI. Can you briefly describe that period to us, and the story of your involvement in the early days of OpenAI?

Mira Murati: I actually worked in the aviation field briefly after graduating from college, but then I realized that the development of the aviation field was quite slow. I was very interested in Tesla's mission and the innovative challenges of building a sustainable future for transportation, so I decided to join Tesla.

After working on the Model S and Model X, I realized that I didn't want to work in the automotive industry either. I wanted to do something that would really move society forward while solving some very difficult engineering challenges.

When I was at Tesla, I was very interested in technologies such as self-driving cars, computer vision, and AI, and their applications in self-driving cars. At that time, I wanted to learn more about other areas of AI. So I joined a start-up company, where I led the engineering and product teams, applying AI and computer vision to the field of spatial computing to study the next interface of computing.

At that time, I thought that the interactive interface for computing would be VR and AR, but now I think differently. At that time, I thought that if we could interact with very complex information with our hands, whether it was formulas, molecules, or topological concepts, we could understand these things more intuitively and expand our knowledge. However, it turns out that it was too early to talk about VR at that time.

But this gave me many opportunities to learn about AI technology in different fields. I think my career has always been at the intersection of technology and applications. It gave me a different perspective and a general understanding of how far AI has developed and what areas it can be applied to.

Jeffrey Blackburn: So in Tesla's autonomous driving work, you saw the possibilities of machine learning and deep learning, and the direction they were heading.

Mira Murati: Yes, but I couldn't see it clearly yet.

Jeffrey Blackburn: Have you ever worked for Musk?

Mira Murati: Yes, especially in the final year. But at that time, we weren't quite sure where AI was headed. We were still only applying AI to specific application scenarios, not general ones. The same goes for VR and AR. And I didn't want to just apply these techniques to specific problems. I wanted to do more research, understand the principles behind them, and then start applying them to other things.

I joined OpenAI at this stage. OpenAI's mission was very attractive to me. It was a non-profit organization at the time; the mission has not changed since, but the structure has. When I joined six years ago, it was a non-profit dedicated to building safe AGI (artificial general intelligence). At that time, OpenAI was essentially the only company other than DeepMind doing this kind of research. That was the beginning of my journey at OpenAI.

2. Three major technological advances made ChatGPT possible, and practice has proven the model can deeply understand data

Jeffrey Blackburn: Got it, so you've been building a lot of things since then. Maybe you can give the audience some AI basics. From machine learning to deep learning to today's AI, these concepts are all related, yet different. How did these transitions happen, and how did they make products like ChatGPT, DALL·E, or Sora possible?

Mira Murati: In fact, our products are not brand new. In a sense, they are built on decades of joint human effort, an effort that actually began at Dartmouth College (the 1956 Dartmouth workshop is widely regarded as the birthplace of AI research).

Over the past few decades, the combination of neural networks, large amounts of data, and massive amounts of computing power has led to truly transformative AI systems or models that are capable of performing common tasks. Although we don’t know why it is successful, deep learning really works. We also try to use research and tools to understand how these systems actually work. However, based on our experience in studying AI technology in the past few years, we know that this path will work. We have also witnessed their gradual progress.

Take GPT-3 as an example, a large language model deployed about three and a half years ago. The goal is to predict the next token, basically predicting the next word. We found that if we give the model the task of predicting the next token, train it on a large amount of data, and give it a large amount of computing resources, we get a model that truly understands language, at a level similar to humans.
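To make "predict the next token" concrete, here is a deliberately tiny sketch (our illustration, not OpenAI's code): a bigram model that simply counts which word follows which in a toy corpus. GPT-3 replaces the counting table with a deep neural network and billions of parameters, but the training objective has the same shape.

```python
# Toy next-token predictor (illustrative only, not OpenAI code):
# count which token follows which, then predict the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Tally next-token frequencies for every token in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent follower of `token` seen during 'training'."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else "<unk>"

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```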

The model forms its own understanding of the patterns in this data by reading huge numbers of books and much of the Internet, rather than simply memorizing it. We also found that this approach can handle not only language but also other types of data, such as code, images, video, and sound. The model doesn't care what data we feed it.

We found that the combination of data, compute, and deep learning works very well, and the performance of these AI systems keeps improving as we increase the data and the amount of computation. This is the so-called scaling law. It is not an actual law of nature, but a statistical prediction of how model capabilities improve. This is what drives today's AI progress.
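As an illustration of "a statistical prediction, not an actual law": published scaling-law work (e.g., Kaplan et al., 2020) fits power laws to measured loss-versus-compute data. The sketch below shows the shape of such a fit; every number in it is hypothetical.

```python
# Illustrative only: fitting a power law L(C) = a * C**b (with b < 0) to
# hypothetical (compute, loss) measurements. All numbers are made up.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21])  # hypothetical training FLOPs
loss = np.array([3.2, 2.7, 2.3, 1.95])        # hypothetical evaluation loss

# A power law is a straight line in log-log space: log L = b*log C + log a.
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)
print(f"fit: L(C) = {a:.2f} * C^({b:.3f})")

# Such fits are then extrapolated (cautiously) to larger training runs.
print(f"predicted loss at 1e22 FLOPs: {a * 1e22**b:.2f}")
```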

Jeffrey Blackburn: Why did you choose a chatbot as your first product?

Mira Murati: As far as the product is concerned, we actually started with the API, not the chatbot, because we didn't know how to commercialize GPT-3. Commercializing AI technology is actually very difficult. We initially focused on technology development and research, believing that as long as we built an excellent model, business partners would naturally use it to build products. But we then discovered that this was actually very hard, which is why we started developing our own products.

So we set out to build a chatbot ourselves, and we tried to understand why a successful company couldn't turn this technology into a useful product. We ultimately discovered that this was actually a very strange way to build a product - starting from the technology rather than starting from the problem to be solved.

3. Model capability and safety complement each other; only smart models can understand the guardrails set by humans

Jeffrey Blackburn: With the growth of computing power and data, intelligence seems to be developing in a linear way: as long as you add these ingredients, it gets smarter. How fast has ChatGPT developed in the past few years? When will it achieve human-level intelligence?

Mira Murati: In fact, these systems have reached human-like levels in some fields, but gaps remain on many tasks. On the current trajectory, a system like GPT-3 might have toddler-level intelligence, while GPT-4 is more like a smart high school student. In the next few years, we will see them reach PhD-level intelligence on specific tasks. The pace of progress is still very fast.

Jeffrey Blackburn: Are you saying there will be a system like this in a year?

Mira Murati: A year and a half. Perhaps by then there will be AI systems that can surpass human performance in many fields.

Jeffrey Blackburn: This rapid growth in intelligence has sparked a discussion about security. I know you have been paying close attention to this topic, and I am happy to see you researching these issues. But we really want to hear your point of view.

Suppose that in three years, when AI systems become extremely smart and can pass every bar exam anywhere and every test we design, is it possible that one will decide to connect to the Internet on its own and start acting autonomously? Will this become a reality? As the CTO of OpenAI and the person leading its product direction, do you think about these issues?

Mira Murati: We have been thinking about this. We are bound to have agentic AI systems that can connect to the Internet, talk to each other, complete tasks together, and work seamlessly alongside humans.

As for the security issues and social impact of these technologies, I think we cannot solve them after the problems arise. Instead, we need to embed the solutions to the problems into the technology as the technology develops to ensure that these risks are properly handled.

Model capability and safety are complementary; they go hand in hand. It is much easier to tell a smart model not to do something than to get an unintelligent model to understand the concept at all. It's like the difference between training a smart dog and a less intelligent one. Intelligence and safety go hand in hand: smarter systems better understand the guardrails we set.

Currently, everyone is debating whether we should do more safety research or more capability research. I think this framing is misleading.

When developing a product, of course you have to consider safety and guardrails, but in research the two actually complement each other. We think this should be approached very scientifically: try to predict what capabilities a model will have before it finishes training, and prepare the corresponding guardrails along the way.

But so far this is not the norm in the industry. We train these models, and so-called emergent capabilities appear. They emerge out of the blue; we don't know in advance whether they will appear. Although we can see performance improving in the metrics, we don't know whether that improvement means the model has gotten better at translation, biochemistry, programming, or something else.

Scientific research on predicting model capabilities can help us prepare for what's coming. All safety research must keep pace with the direction of the technology and be implemented alongside it.

4. Deepfake risks are unavoidable, but multi-party cooperation can address them

Jeffrey Blackburn: Mira, there are AI-forged videos online of Ukrainian President Zelensky saying "We surrender," or of Tom Hanks appearing in a dentist commercial. Is this an issue your company should control, or does it require regulation? How do you view this?

Mira Murati: My view is that this is our technology, so we are responsible for how it is used. But it is also a responsibility shared with people, society, governments, content producers, the media, and so on. We need to figure out how to use these technologies together. And for it to be a shared responsibility, we need to bring people along: give them access, give them tools to understand the technology, and give them the appropriate guardrails.

I don't think it's possible to be completely risk-free; the question is how to minimize risk and give people the tools to do so. Take government as an example: it is very important to do research with them and give them early access, so that governments and regulators understand what is happening inside companies.

Perhaps the most important thing ChatGPT has done is make the public aware of AI, letting people feel firsthand what this technology can do as well as its risks. When people try AI for themselves and apply it to their own work, they can see clearly that it cannot do certain things but can do many others, and understand what the technology means for them or for the labor market as a whole. This allows people to prepare.

5. Cutting-edge models require more supervision, and prediction of model capabilities is key

Jeffrey Blackburn: This is a good point. The interactive interfaces you created, such as ChatGPT and DALL·E, let people see what's coming. There is one last point I want to make about government. You want some regulation put in place now, rather than waiting a year or two until the systems become really smart, maybe even a little scary. So what exactly should we do now?

Mira Murati: We have been advocating for more oversight of frontier systems. These models are extremely capable, and for precisely that reason the potential harm from misuse is also greater. We have always been very open with policymakers and work closely with regulators. For smaller models, I think it's good to allow a lot of breadth and richness in the ecosystem, and not shut people out of innovating in this area just because they have fewer computing or data resources.

We have been advocating for more regulation of cutting-edge systems, where the stakes are much higher. There, you can anticipate what's coming rather than scrambling to keep up with changes that are already underway.

Jeffrey Blackburn: You probably wouldn't want the US government regulating the release of GPT-5, right? Having them tell you what to do.

Mira Murati: It depends on the situation. A lot of the safety work we are doing has been codified by the government into guidelines for AI oversight. We have completed a lot of work on AI safety and have even provided principles for AI deployment to the US government and the United Nations.

I think that to do a good job on AI safety, you need to genuinely participate in AI research, understand what these technologies mean in practice, and then craft regulation based on that understanding. That is what is happening right now.

For regulation to stay ahead of these cutting-edge systems, we need further scientific research on predicting model capabilities, so that the right regulation can be proposed.

Jeffrey Blackburn: I hope there are people in government who understand what you are doing.

Mira Murati: It seems that more and more people are joining the government and these people have a better understanding of AI, but it is not enough.

6. All knowledge-based work will be affected by AI. AI makes the "first draft" of everything easier.

Jeffrey Blackburn: You are among the people in the AI industry, and indeed the world, best positioned to see how this technology will impact companies across industries. It has applications in finance, content, media, and medicine. Looking ahead, which industries do you think will undergo huge changes because of AI and your work at OpenAI?

Mira Murati: This is very much like the question entrepreneurs asked us when we first started building products on GPT-3. People would ask me: what can I do with it? What's it for? I would say: anything; you'll know once you try it. I think it will affect all industries, and no area will be untouched, at least in terms of knowledge work. It may take a little longer to reach the physical world, but I think everything will be affected.

Now we are seeing a lag in the application of AI in high-risk areas such as healthcare or law. This is also very reasonable. First, you need to understand and use it in low-risk, medium-risk use cases and make sure those use cases are handled safely, and then you can apply it to high-risk things. There should be more human supervision initially, then gradually switching to higher levels of human-machine collaboration.

Jeffrey Blackburn: What are some use cases that are emerging, coming soon, or that you personally prefer?

Mira Murati: My favorite use case so far is that AI makes the first step of everything we do much easier, whether it's creating a new design, code, an article, or an email.

The "first draft" of everything is made easier with , which lowers the barriers to doing something and allows people to focus on the more creative, difficult parts. Especially in terms of code, you can outsource a lot of tedious work to AI, such as documentation. On the industry side, we've seen a lot of applications. Customer service is definitely an important application area for AI chatbots. The

analysis class also works, because now we have connected many tools to the core model, which makes the model easier to use and more efficient. We now have code analysis tools that can analyze large amounts of data, can dump all kinds of data into it, and it can help you analyze and filter the data. You can use image generation tools, you can use browsing tools.If you are preparing a thesis, AI can allow your research work to be completed faster and more rigorously.

I think this is the next level of model productivity: adding tools to the core models and integrating them deeply. The model itself determines when to use analysis tools, search, or coding tools.
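As a sketch of the loop Murati describes, where the model decides when to invoke a tool: the application inspects the model's output, executes any requested tool, and feeds the result back as context. The message format and tool names below are entirely made up for illustration; OpenAI's actual tool-calling API differs.

```python
# Minimal tool-dispatch loop (illustrative; formats and tools are hypothetical).
import json

TOOLS = {
    "search": lambda arg: f"top results for {arg!r}...",  # stub search tool
    "run_code": lambda arg: str(eval(arg)),               # toy sandbox only!
}

def handle_model_output(output: str) -> str:
    """If the model emitted a structured tool call, run it; else pass through."""
    try:
        call = json.loads(output)  # e.g. {"tool": "run_code", "arg": "2 + 2"}
    except json.JSONDecodeError:
        return output              # plain text: treat as the final answer
    result = TOOLS[call["tool"]](call["arg"])
    return f"[tool:{call['tool']}] {result}"  # fed back into the conversation

print(handle_model_output('{"tool": "run_code", "arg": "2 + 2"}'))  # -> [tool:run_code] 4
```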

7. Some creative jobs perhaps should never have existed; models will become great creative tools

Jeffrey Blackburn: As models gradually watch all the TV shows and movies in the world, will they start writing scripts and producing movies?

Mira Murati: These models are a tool, and as a tool it can certainly accomplish these tasks. I expect we can work with these models to expand the boundaries of our creativity.

And how do we humans view creativity? We think of it as something very special, accessible only to a talented few. These tools actually lower the threshold for creativity and make people more creative. So in that sense, I think models can be great creative tools.

But I think it's really going to be a collaborative tool, especially in creative fields. More people will become more creative. Some creative jobs may disappear, but if the content they produce isn't of high quality, maybe those jobs shouldn't have existed in the first place. I really believe that AI can be a tool for education and creativity, and that it will enhance our intelligence, creativity, and imagination.

Jeffrey Blackburn: People used to think that things like CGI were going to destroy the film industry, and they were very scared. The influence of AI is definitely greater than CGI, but I hope you are right about AI.

8. The impact of AI on work is still unknown, but the economic transformation is unstoppable

Jeffrey Blackburn: People are worried that many jobs may be at risk of being replaced by AI. What impact does AI have on people’s work? Can you talk about this issue as a whole? Should people really be worried about this? Which types of jobs are more dangerous, and how do you see this all developing?

Mira Murati: The fact is, we don't really understand what impact AI will have on employment. The first step is to help people understand what these systems can do and integrate them into their workflows, and then start predicting and anticipating the impact. I don't think people realize how much these tools are already used, and there isn't enough research in this area.

Therefore, we should study how the nature of work and of education is already changing, which will help us anticipate and prepare for further gains in AI capability. I'm not an economist, but I do expect a lot of jobs to change: some will disappear and new ones will appear. We don't know exactly what that will look like, but I can imagine many strictly repetitive tasks disappearing. People don't grow while doing that kind of work.

Jeffrey Blackburn: Do you think enough other jobs will be created to compensate for the jobs that disappear?

Mira Murati: I think a lot of jobs will be created, but how many will be created, how many changed, and how many lost, I don't know. I don't think anyone really knows yet, because the issue hasn't been carefully studied, but it genuinely needs attention.

But I think the economy is going to transform and these tools are going to create a lot of value, so the question is how to harness that value. If the nature of work does change, how do we distribute that economic value across society? Through public benefits? Through universal basic income (UBI)? Through some other new system? There are many questions that need to be explored and solved.

9. AI can achieve universal access to higher education and provide customized learning services

Jeffrey Blackburn: Maybe higher education plays an important role in the work you describe. What role do you think higher education will play in the future evolution of AI?

Mira Murati: I think it's important to figure out how to use AI tools to advance education, because I think one of the most powerful applications of AI will be in education, enhancing our creativity and knowledge. We have the opportunity to use AI to build ultra-high-quality education that is easily accessible, ideally free, to anyone in the world, presented in any language and reflecting the nuances of different cultures.

With AI, we can provide customized education to anyone in the world. Of course, at an institution like Dartmouth, the classrooms are smaller and students get a lot of attention. But even here, one-on-one tutoring is hard to come by, let alone anywhere else in the world.

Actually, we don't spend enough time learning how to learn, and it usually happens much too late in life, such as in college. But knowing how to learn is a very basic skill; if you don't master it, you waste a lot of time. With AI, the curriculum, coursework, problem sets, everything can be customized to a student's own learning style.

Jeffrey Blackburn: So you think that even at a place like Dartmouth, AI can supplement education?

Mira Murati: Absolutely, yes.

10. User feedback shapes the system's basic values; highly customizable value systems are in development

Jeffrey Blackburn: How about we start the audience question session?

Mira Murati: OK, no problem.

Audience 1: As one of the first computer scientists at Dartmouth, John Kemeny once gave a lecture about how every computer program built by humans has human values embedded in it, whether intentionally or unintentionally.

What human values do you think are embedded in GPT products? Or put another way, how should we embed values such as respect, fairness, justice, honesty, and integrity into these tools?

Mira Murati: This is a good question and a very difficult one. We have been thinking about these questions for a long time. Many of the values in the current system are embedded through data: data from the Internet, licensed data, and data annotated by human labelers. Each source carries specific values, so what matters is the aggregate collection of values. Once we put the product out into the world, we have the opportunity to bring a much broader set of values in from more people.

Today we make our most powerful system available for free through ChatGPT, which is used by over 100 million people around the world. All of these people can provide feedback. If they allow us to use their data, we use it to build this aggregated set of values, making the system better and more in line with people's expectations.

But that is just the default underlying system. What we really want is a customization layer on top of it, so that each group can have its own values: a school, a church, a country, even a state. On top of the default system with basic human values, they supply their own more specific and precise values and build their own system.

We are also working out how to do this, and it is a genuinely difficult question, partly because we humans can't agree on everything ourselves, and partly because there are technical problems to solve. On the technical side, I think we've made a lot of progress. We use methods like reinforcement learning from human feedback (RLHF) to let people feed their values into the system. We have just released the Model Spec, a specification document that provides greater transparency into the values in the system, and we are building a feedback mechanism that collects input and data to evaluate progress against the Spec. You can think of it as the constitution of the AI system.

But this "Constitution" is constantly changing, as our values ​​evolve over time. it will become more precise. This is an issue we are focusing on. Right now we are thinking about basic values, but as systems become more complex we will have to think about more nuanced values.

Jeffrey Blackburn: Can you prevent it from getting angry?

Mira Murati: No, that should actually be up to you. As a user, if you want an angry chatbot, you can have a chatbot like this.

11. The Sky voice problem was not caught in red-team exercises; ways to compensate creators are under study

Audience 2: I am really curious, how do you consider copyright issues and biometric rights (such as voiceprints, fingerprints etc.). You mentioned earlier that some creative jobs may no longer exist, and many people working in creative industries are thinking about licensing and compensation for data use. Because whether it is a proprietary model or an open source model, the data is obtained from the Internet. I'd really like to know your thoughts on licensing and compensation as it relates to copyright issues.

Another thing is biometric rights, such as rights over one's voice and likeness. OpenAI recently faced controversy over the Sky voice, and this election year is also threatened by deepfake disinformation. What do you think of these issues?

Mira Murati: Okay, I’ll start from the last part. We did a lot of research on voice technologies and didn't release them until recently because they presented a lot of risks and issues.

But it is equally important to ensure social acceptance: to set up safeguards and control risks while giving people access, and to let other researchers study the technology and make progress.

For example, we are working with institutions that help us think about how AI interacts with humans. The models now have voice and video, which are emotionally very resonant modalities. We need to understand how these things will develop and prepare for those situations.

In this case, Sky's voice is not Scarlett Johansson's, nor was it intended to be hers. I was in charge of selecting the voices, and our CEO was in conversation with Scarlett Johansson. The two processes were completely parallel and did not interfere with each other.

But we took it down out of respect for her. Some people have heard some similarities, but these things are very subjective.

I think this type of problem can be dealt with through red-team exercises (adversarial testing in which designated testers probe a system for problems before release). If a voice had been judged very similar to a well-known public voice, we would not have chosen it.

However, this problem did not surface in our red-team exercise, which is why it's important to conduct broader red-team exercises to catch such issues ahead of time.

But our overall strategy around biometrics is to initially only provide access to a small number of people, like experts or red team members, and let them help us get a good understanding of the risks and capabilities, and we build solutions accordingly.

As we become more confident in these measures, we will give access to more people. We are not letting people use this technology to clone their own voices, because we are still researching the risks and are not yet confident we can handle abuse in that area.

But we are quite satisfied with the current safety measures around the handful of voices in ChatGPT, which help prevent abuse. We started with a small-scale test, essentially an extended red-team exercise. Then, as we scale to 1,000 users, our alpha program works closely with those users to gather feedback and understand edge cases, so that we are prepared as we scale to 100,000 people, then 1 million, then 100 million, and so on. All of this happens under strict control; it is what we call iterative deployment.

If we don't feel a use case is safe enough, we won't release it to users, or we will limit the product's functionality for that specific use case, because capability and risk coexist.

But we're also doing a lot of research to help us deal with issues of content provenance and content authenticity, giving people the tools to understand whether something is a deepfake, or is disinformation and so on.

Since the early days of OpenAI, we have been studying the problem of disinformation. We have built many tools, such as watermarks, content policies, etc., that allow us to manage disinformation. Especially this year, given that it's a global election year, we've stepped up that work even more.

However, this is an extremely challenging area. As makers of the technology and products we need to do a lot of work, but we also need to work with people, society, media, and content producers to figure out how to solve these problems.

When developing technologies such as voice and Sora, we first work with red-team members to study the risks together. The next step is to study the issue with content creators, to understand how the technology can help them and how to build a product that is safe, useful, and genuinely advances society. We have done similar research for both DALL·E and Sora.

The issues of compensation and licensing are important and challenging. We work a lot with media companies and give people a lot of control over how their data is used in our products. If they don't want their data used to improve the model, or used in any of our research or training, that's totally fine; we do not use it.

Then, for the broader creator community, we get them using these tools early, so that we hear first-hand how they want to use them and can build the most useful product based on that information.

Also, these are research previews, so we don't need to spend a fortune building the product. We will invest heavily in development only if we are sure that the technology will be of great use.

We are also trying to create tools that let people be compensated for their data contributions. This is very tricky from a technical perspective, and it is hard to build such a product, because you have to figure out how much value a specific amount of data creates in the trained model.

It is difficult to estimate how much value individual data actually creates. But if we could create an aggregated data pool and let people contribute data to it, such measurement might be easier.

So for the past two years we've been trying these approaches but haven't actually deployed anything yet. We have experimented with the technology and made some progress, but it is a very difficult problem.
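To illustrate why this is hard, and why an aggregated pool helps: valuing one contributor means comparing model quality with and without their data, and each comparison is, in principle, a full training run. A toy leave-one-out sketch (our illustration; every name and number is hypothetical) looks like this. Research approximations such as data Shapley values exist precisely because the exact computation is prohibitively expensive.

```python
# Toy leave-one-out valuation over an aggregated data pool (hypothetical).
def train_and_score(pool: set[str]) -> float:
    """Stand-in for an expensive training run + evaluation on `pool`."""
    return 70.0 + 5.0 * len(pool)  # toy metric: more data -> higher score

contributors = {"studio_a", "writer_b", "archive_c"}
full_score = train_and_score(contributors)

# A contributor's marginal value = score drop when their data is removed.
for c in sorted(contributors):
    without = train_and_score(contributors - {c})
    print(f"{c}: leave-one-out contribution = {full_score - without:.1f}")
```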

12. If I went back to my student days I would put less pressure on myself; expanding your knowledge is important

Audience 3: My question is quite simple. If you could go back to college now, back to Dartmouth, what would you do, and what wouldn't you do again? What would you major in, or what would you get more involved in?

Mira Murati: I think I would study the same things, but perhaps with less pressure. I would still take math, but also more computer science courses. And I would be less stressed, learning in a more curious and joyful way. That is definitely more productive.

When I was a student, I was always a little stressed and worried about what would happen next. Everyone would tell me not to be stressed, but somehow I was always stressed. When I talk to my seniors, they always say that they should enjoy studying, devote themselves to it, and be less stressed.

I think that in terms of course selection, it is best to study more subjects and have a little understanding of everything. I found this to be a good thing while in school and after graduation. Even though I am now working in a research institute, I am always learning and never stop. It's good to have an understanding of everything.

Jeffrey Blackburn: Thank you so much; I know your life is demanding. Thank you for being here today, and thank you for the very important and valuable work you do for society. Thanks to everyone at Dartmouth. To bring this conversation to a close, I want to thank all of you again for coming. Enjoy the rest of your graduation weekend. Thank you all.

Source: Dartmouth College