According to news on May 3, OpenAI co-founder and CEO Sam Altman recently appeared at three top American universities, Stanford University, Harvard University, and the Massachusetts Institute of Technology (MIT), where he engaged in wide-ranging conversations.

In a speech at Harvard University this morning, Altman admitted that the mysterious gpt2-chatbot is indeed related to OpenAI, but it is not GPT-4.5.

He noted that OpenAI can make progress on the behavior and functionality of all models simultaneously.

"I think it's a miracle. Every college student should learn to train GPT-2... It's not the most important thing, but I bet that in two years it's something every Harvard freshman will have to do," Altman said.

In his MIT speech, Altman said that AI agents will be the killer application of AI: "Like a super-competent coworker who knows everything about my life, every email I have, every conversation I've had."

Altman believes that ChatGPT has the potential to "exponentially increase productivity," freeing users from manual work the way a calculator freed them from performing calculations by hand; he called the app a "word calculator."

But he also warned that people should not assume that merely using ChatGPT will prepare them for the future world. He believes that ChatGPT and other AI agent applications can play a huge role across many fields.

At an earlier event, the Stanford University Entrepreneurial Thought Leaders lecture, Altman said that judging from current technological innovation and functional iteration, humans are far from reaching the limits of AI, and focusing only on AI's current capabilities is futile.

What's next for ChatGPT?

A year ago, when OpenAI released the GPT-4 model, the world went crazy, thinking it would completely change how many industries work. Now, looking back at GPT-4, we often say sarcastically, "It looks so stupid. When will GPT-5 be released?"

In response, Altman calmly said "yes" with a smile on his face, and nothing more.

"GPT-4 is the dumbest model any of you will ever have to use again. I don't care if we burn $50 billion a year; we are building AGI, and it will be worth it," Altman said.

Altman emphasized that on the road to artificial general intelligence (AGI), ChatGPT's current performance is still not outstanding, and GPT-4 may prove to be the dumbest model anyone will ever have to use. To address this, it is important to roll out new models early and frequently, i.e., iterative deployment.

Altman pointed out the nature of AI development: "We can be highly certain that GPT-5 will be much smarter than GPT-4, GPT-6 will be much smarter than GPT-5, and we are not yet close to the top of this curve. Generally speaking, each new generation of products will be smarter, but I think the importance of this statement is still underestimated."

As for the focus of AI development, Altman said that OpenAI's ultimate goal has never changed: achieving AGI.

"Open source is not the best approach"

Altman said that open source means giving up proprietary control of the technology. OpenAI has invested enormous manpower and computing resources in developing its products and needs to earn large commercial returns from them, then use those funds to innovate and iterate toward smarter products. On this point, Elon Musk reached a consensus with Altman and the other founders when he was still involved with OpenAI: they believed it would be impossible to build AGI without burning billions of dollars every year.

Altman also said in this discussion that to realize AGI, it does not matter whether it costs $500 million, $5 billion, or $50 billion per year, as long as it contributes to all of humanity and to the AI field. But this requires healthy funding sources; relying solely on donations and outside financing is not enough.

This is also one of the fundamental reasons why OpenAI switched from its original open-source strategy to closed source. Today, many organizations and individual developers can easily reproduce the capabilities of GPT-4, and even surpass it on individual benchmarks. But OpenAI's core strength is technological change: the next paradigm shift that can truly redefine AI capabilities, just as Apple's iPhone disrupted the mobile industry.

For example, on February 15 this year, OpenAI released the globally sensational text-to-video model Sora, which may redefine film and television, game development, advertising and marketing, and other industries; its influence may be even greater than ChatGPT's was.

Altman said that OpenAI is not afraid of others copying or imitating its products, because in the field of generative AI, OpenAI will always be one of the industry leaders, lighting the way for countless entrepreneurs and developers.

When talking about entrepreneurial opportunity, Altman said he believed he came of age at the luckiest moment in the past few centuries.

"I have a clear understanding of how much the world is going to change and of the opportunities to impact it. Starting a business, doing AI research, anything is pretty amazing. I think this is probably the best time to start a company since the Internet era, and maybe in the history of technology. You can do more amazing things with AI than you could a year ago. The greatest and most influential companies and products are born at moments like this. I feel extremely lucky, and I am determined to make the most of it; I will figure out where I want to contribute and go do it."

Altman also shared with everyone that it is important to stick to your own entrepreneurial ideas.

He said: "I think we should learn to believe in ourselves, put forward our own ideas, and do non-consensus things, just like when we founded OpenAI. OpenAI was non-consensus at the beginning, but now it has been recognized by the world. At this point I only have the obvious ideas, because I'm stuck inside a framework, but I'm sure the people here will have other, different ideas."

Not long ago, OpenAI announced that ChatGPT can be used for free without registration. At the time, the company said that OpenAI's driving force on the difficult road to AGI is to benefit all of humanity.

Regarding the timeline for achieving this goal, Altman emphasized: "I have given up on giving a timetable for achieving AGI, but I think that for many years to come, we will launch a more powerful system every year."

About the future of artificial intelligence hardware

Altman dropped out of Stanford University in 2005 and later became famous for leading OpenAI, the AI R&D organization behind ChatGPT and DALL·E. OpenAI was founded in 2015 as a non-profit research laboratory with the mission of "ensuring that AGI benefits all of humanity."

Now there is news that Altman will work with Apple designers to create future AI hardware devices.

In response, Altman said during the discussion, "I don't think it would require a new piece of hardware," adding that the types of applications envisioned could exist in the cloud.

Altman noted that AI hardware devices are indeed exciting, but he himself may not be the right person to take on the challenge: "I am very interested in new consumer hardware technology, but I am only a hobbyist; it is far from my area of expertise."

Regarding the topic of copyright of model data, Altman is optimistic that the issue will not last long, although he did not elaborate.

"I am optimistic, but I'm not sure we'll figure out a way around it; you always need more and more training data. Humans are proof that there are other ways of 'cultivating intelligence.' Hopefully we can find them," Altman said.

Today's OpenAI is no longer what it used to be: it has long since grown from an unknown small company into a leader in generative AI. Riding the tailwind of ChatGPT, NVIDIA has become a technology giant worth over US$2 trillion. Altman hopes the bond between the two companies will endure.

When discussing the future of AI, Altman emphasized: "I think resilience and adaptability will be more important in the next few decades than they have been in a long time."

The following are excerpts from the conversation:

Moderator: If you could use three words to describe your feelings as a Stanford undergraduate, what three words would you use?

Altman: Excited, optimistic, curious.

Moderator: What three words would you use to describe now?

Altman: I guess it’s the same.

Moderator: That's great. A lot has happened in the past 19 years, but those changes may pale in comparison to the ones coming in the next 19. So here is my question: if you woke up tomorrow and suddenly found yourself 19 years old again, with all the knowledge you have now, what would you do? Would you be happy?

Altman: I would feel that I was at a very historic moment, with the world undergoing great changes, and I would see opportunities to participate and have a profound impact, such as starting a company or doing AI research.

I think now is the best time to start a company since the Internet era, and maybe in the history of technology. With the advancement of AI, more miraculous and great companies will be born each year, and the most influential products will be born at moments like this. So I would feel extremely lucky and determined to make the most of this opportunity; I would clarify the direction of my contribution and put it into practice.

Moderator: Do you have a preference for the field in which you would contribute? If so, what major would you choose?

Altman: I would not continue to be a student. It seems reasonable to assume that people would make the same decisions they made before, but I don't think that is what I would want.

Moderator: What would you do?

Altman: I guess it's not surprising because people usually do what they want. I would go into artificial intelligence research.

Moderator: Where might you do it? Academia or in private industry?

Altman: Obviously I'm biased toward OpenAI, but I think I would be excited to do meaningful AI research anywhere. Sadly, though, the reality is that I would choose industry, because this work really needs to be done somewhere with extremely abundant computing resources.

Moderator: We had Qasar Younis last week and he was a strong advocate of not becoming a founder but joining an existing company to learn relevant skills. What advice would you give to students who are wondering whether they should start their own business at the age of 19 or 20, or join other startups?

Altman: Since he gave the reasons for joining other companies, let me talk about another point of view. I think there’s a lot to learn from starting your own company. If this is what you want to do, Paul Graham has a great saying that I think is very true: Entrepreneurship doesn’t have a premed level like medicine, you only learn how to manage a company by actually running a startup. If you're convinced that this is what you want to do, then you should probably just dive right in and do it.

Moderator: If someone wants to start a company and engage in the AI ​​field, what do you think are the short-term challenges in the AI ​​field that are most suitable for entrepreneurship? To be clear about this, I mean what problems do you think need to be solved as a priority but that OpenAI won't be able to solve in the next three years?

Altman: In a sense, this question is very reasonable, but I won't answer it directly. Because I think you should never take this kind of advice from anyone on how to start a business.

When an area is so obvious that I or anyone else could point it out from up here, it's probably not a good place to start a business. I totally get it, though; I remember asking people, "What kind of company should I start?"

But I think one of the most important principles of having an impactful career is that you have to make your own path. If you're thinking about something that other people would do, or that a lot of people would do, then you should be a little skeptical.

I think an important ability we need to develop is coming up with non-obvious ideas. I don’t know what the most important thought is right now, but I’m sure someone in this room knows the answer. I think it’s important to learn to trust yourself, come up with your own ideas and be brave enough to do things that aren’t widely recognized.

For example, when we first started openai, this matter was not recognized by many people, but now it has become a very obvious thing. Now I only have clear ideas about this direction because I am in it, but I believe you will have other opinions.

Moderator: Let’s put it another way. I don’t know if it’s fair to ask this. What’s an issue you’re thinking about that no one else is talking about?

Altman: How to build a truly large computer. I think other people are talking about this too, but we're probably looking at it from a perspective others can't yet imagine. The problem we are trying to solve is to develop intelligence not only at the elementary or middle-school level but at the PhD level and beyond, and to apply it to products in the way that maximizes the positive impact on society and on people's lives. We don't know the answer yet, but I think it's an important question to figure out.

Moderator: Continuing with the question of how to build a truly large computer, can you share your vision? I know there has been a lot of speculation and rumor about the semiconductor foundry projects you're working on. How does this vision differ from current practice?

Altman: The foundry is just one part of it. We increasingly believe that AI infrastructure will be one of the most important investments of the future, a resource everyone will need, including energy, data centers, chips, chip design, and new networks. We need to look at the entire ecosystem holistically and try to do more in all of these areas. Focusing on just one part won't work; we have to think about the whole.

I think this is the trajectory of the history of human technological development: constantly building larger and more complex systems.

Moderator: As for the computational cost, I heard that it cost 100 million US dollars to train the chatgpt model, and its parameter size is 175 billion. The cost of gpt-4 is US$400 million, and the number of parameters is 10 times that of the former. The cost has increased almost 4 times, but the number of parameters has increased 10 times. Please correct me if I'm wrong.

Altman: I know, but I want to...

Moderator: Okay. Even if you don't want to correct the actual numbers, if that's the right direction, do you think the cost per update will continue to grow?

Altman: Yes.

Moderator: Will it grow exponentially?

Altman: Probably.

Moderator: So the question becomes, how do we raise money for this?

Altman: I think it's very valuable to give people really powerful tools and let them explore for themselves how they can use these tools to build the future. I would very much like to trust your creativity and the creativity of others around the world to find a way to deal with this problem. So, there may be people in OpenAI who are more business-minded than me who are worried about how much we're spending, but I don't really care.

Moderator: OpenAI's ChatGPT and all the other models are very good, yet the company burned $520 million last year. Doesn't that worry you about the business model? Where are the sources of profit?

Altman: First of all, thank you for saying that, but ChatGPT is far from outstanding; at best it is barely adequate. GPT-4 is the dumbest model any of you will ever have to use. However, it's important to ship early and often; we believe in iterative releases.

If we develop general artificial intelligence in our basements and the world goes on blindly without realizing it, I don't think that will make us good neighbors. So, given our view of the future, I feel it's important to express our perspective.

However, what is more important is to put the product into the hands of users and let society and technology evolve together. Let society tell us what we collectively and individually want from the technology, and how to productize it so it is easy to use. Seeing where the model works well and where it works poorly gives our leaders and institutions time to react, and gives people time to integrate it into their lives and learn to use the tool.

Some people may use it to cheat, but some people may also use it to do amazing things. Development expands with each generation, which means we release products that aren't perfect, but there's a very tight feedback loop where we learn and get better.

It kind of sucks to release a product that embarrasses you, but it's a better approach than the alternative. In this particular case, we really should be releasing iteratively to the community.

We learned that AI and surprises don't mix. People don't want to be spooked; they want incremental progress and the ability to influence these systems. That is how we operate.

There may be situations in the future where we think iterative releases are not a good strategy, but this seems to be the best approach for now. I think we've learned a lot by doing this. Hopefully the wider world will benefit as well.

Whether we spend $500 million, $5 billion or $50 billion a year, I don't care. As long as we can continue to create more social value than that and find a way to pay the bills. We are developing an AGI which will be expensive but definitely worth it.

Moderator: So, do you have a vision for 2030? Suppose it's 2030 and you've done it; what does the world look like to you?

Altman: It may not be much different in some very important respects.

We will be back here again, with a new batch of students. We talk about how important startups are and how cool technology is. We will have new great tools in the world.

If we could teleport six years forward from today, it would seem amazing: this thing that is smarter than humans in many subjects, able to complete complex tasks for us. You know, it can write complex programs, do this research, or start this business.

However, the sun still rises in the east and sets in the west, people continue to play out their human drama, and life goes on. So, in one sense it's very different because we now have abundant intelligence, but in another sense it's not that different.

Moderator: You mentioned general artificial intelligence. In a previous conversation, you defined it as software that simulates the performance of an average human on a variety of tasks. When do you think this goal will be achieved? Can you give an approximate time or range?

Altman: I think we need a more precise definition of AGI to address the timing question, because at this point, even though the definition you just gave is reasonable, it is your definition.

Moderator: I was repeating something you said earlier in the conversation.

Altman: I want to criticize myself. This definition is too broad and easily misunderstood.

So I think the criterion that is really useful, or satisfying to people, is this: when people ask "what's the timeline for AGI," what they actually want to know is when the world will change dramatically, when the rate of change will accelerate significantly, when the way the economy works will change dramatically, and when their own lives will change. That timing may be very different from what we imagine, for many reasons.

I can definitely imagine a world where we can develop PhD-level intelligence in any field, which can greatly increase the productivity of researchers and even enable some independent research. In a sense, this sounds like it will have a big impact on the world, but it is also possible that we have done this only to find that global GDP growth has not changed in subsequent years. It’s still strange to think about this situation. This wasn’t initially my intuition about the whole process.

So I can't give a specific time frame for when we'll reach the milestone people care about, but next year, and every year after that, we will have a much more capable system than we have now, and I think that is the key. So I've given up predicting the AGI timeline.

Moderator: Can you talk about your views on the dangers of AGI? Specifically, do you think the biggest danger will come from a catastrophic, headline-grabbing event, or from something more hidden and insidious, like the way everyone's attention has been badly degraded by TikTok? Or neither?

Altman: I am more concerned about hidden dangers because we are more likely to ignore them.

Many people are talking about catastrophic dangers and are wary of them. I don't want to trivialize those dangers; I think they are serious and real. But at least we know to pay attention to them and are spending a lot of energy on them. With something like your TikTok example, we may not even notice it happening until the damage is done. Those unknowns are really hard to predict, so I worry about them more, even though I worry about both.

Moderator: What might such an unknown factor be? Can you name one that particularly worries you?

Altman: Well, if I could name it, it would no longer be unknown.

Although I think there will be fewer changes in the short term than we expect, as with other major technologies, in the long run I think the changes will be greater than we expect. I worry about how quickly society can adapt to this new thing, and how long it will take us to work out a new social contract compared with how long we actually have.

Moderator: With things changing rapidly, we are trying to make resilience one of the core contents of the course, and the cornerstone of resilience is self-awareness. So, I'm wondering if you were aware of your own motivations as you embarked on this journey.

Altman: First of all, I believe that resilience can be taught and that resilience has always been one of the most important life skills. Resilience and adaptability are going to be even more important in the coming decades, so I think that's a good point. As for the issue of self-awareness, I think I am self-aware, but just like everyone thinks they are self-aware, it is difficult to judge from my own perspective whether I really have it.

Moderator: Can I ask you a question we often ask in our introductory self-awareness courses?

Altman: Of course.

Moderator: This is like Peter Drucker’s framework. Sam, what do you think is your greatest advantage?

Altman: I don't think I'm exceptional at any one thing, but I'm pretty good at a lot of things, and I think a broad range of skills is undervalued in this world. Everyone over-specializes, so if you're good at many things, you can find connections among them. That way you can come up with ideas different from everyone else's, rather than just being an expert in a single field.

Moderator: What is your most dangerous weakness?

Altman: My most dangerous weakness... that's an interesting question. I tend to be pro-technology, probably because I'm curious to see where it leads, and I believe that technology is, on the whole, a good thing.

I think this view of the world has generally benefited me and others, so I get a lot of positive feedback. However, it isn't always true, and when it isn't, the consequences for many people can be very bad.

Moderator: Harvard psychologist David McClelland proposed a framework in which all leaders are driven by one of three primal needs: the need for affiliation (to be liked), the need for achievement, and the need for power. If you had to rank them, how would you?

Altman: I have had these needs at different times in my career. I think people go through different stages. And right now, I think what drives me is wanting to do something meaningful and interesting. I've definitely been through phases of pursuing money, power, and status before.

Moderator: What are you most excited about for the upcoming GPT-5?

Altman: I don't know yet, and that answer sounds a bit perfunctory. But I think the most important thing about GPT-5, or whatever we end up calling it, is that it's going to be smarter.

That sounds like a cop-out, but I think it's one of the most remarkable facts in human history: we can now say with a high degree of scientific certainty that GPT-5 will be much smarter than GPT-4, and GPT-6 will be much smarter than GPT-5. We haven't reached the top of this curve yet, and we roughly know what to do. It's not going to get better in just one area; it won't merely improve on this evaluation, this subject, or this format. It's going to be smarter overall. I think the significance of that fact is still underestimated.

Audience Q&A session

Finally, we also excerpted some exciting content from the audience Q&A session.

Question 1: As you get closer to AGI, how do you plan to deploy it responsibly, so that it doesn't inhibit human innovation but continues to promote it?

Altman: I am not worried that AGI will inhibit human innovation. I truly believe that people will do better things with better tools. History shows that if you give people more leverage, they can do more amazing things. This is a good thing for all of us.

But I do worry more and more about how to do it all responsibly. As models become more powerful, the standards we face will become higher and higher. We already do a lot of things like red team testing and external audits. These are all good. But I think as models become more powerful, we need to deploy them more incrementally and maintain a tighter feedback loop on how they are used and where they are effective.

We used to be able to release a major model update every few years, but now we may need to find ways to increase the granularity of deployment and iteratively deploy more frequently. Exactly how to do this is less clear, but it will be key to responsible deployment.

Additionally, the way we have all stakeholders negotiate AI rules will become increasingly complex over time.

Question 2: You mentioned before that every year we will have more powerful AI systems. Many parts of the world don't have the infrastructure to build these data centers or large computers. How will global innovation be affected?

Altman: Regarding this issue, I would like to talk about it in two parts.

First, I think equitable global access to compute for training and inference is extremely important, regardless of where it is built. A core part of our mission is to make ChatGPT available to as many people who want to use it as possible, except where we are unable to operate or choose not to for good reasons. How to make training compute more available around the world will become increasingly important. I do think we're heading toward a world where access to a certain amount of computing power is regarded as a human right, and we will have to figure out how to distribute it to people around the world.

However, there is a second point: I think countries will increasingly recognize the importance of having their own AI infrastructure. We want to find a way to help, and we're now spending a lot of time traveling around the world, assisting the many countries that want to build these facilities. I hope we can play some small role in that.

Question 3: What role do you think artificial intelligence will play in future space exploration or colonization?

Altman: I think space is obviously not friendly to biological life, so it seems easier if we can send robots.

Question 4: How do you know a point of view is non-consensus? How to verify whether your idea has gained consensus in the scientific and technological community?

Altman: First of all, what you really want is to get it right. Holding an opposing view but being wrong is still wrong.

If you predicted 17 of the last 2 recessions, you may have been contrarian on the two you got right, though perhaps not even then, but you were still wrong the other 15 times. So I think it's easy to get too excited about being a contrarian. Again, the most important thing is to get it right, and the crowd is usually right. But the value is greatest when you hold a contrarian view and are right, and that doesn't always happen in an either/or way. Everyone here can probably agree that AI is the right field to start a business in. If one person in this room figures out the right company to start and then executes it successfully while everyone else thinks it wasn't the best choice, that's all that matters.

As for how to do this, I think it's important to build the right peer group around yourself, and it's also important to find original thinkers. But you kind of have to do it alone, or at least part of it alone, or with a couple of people who are going to be your co-founders or whatever.

I think once you get too deep into the question of how to find the right peer group, you're already in the wrong frame. Learn to trust yourself, your own intuition, and your own thought process; this gets much easier over time. No matter what they say, I don't think anyone is really good at this when they start out, because you haven't built the muscle yet, and the social and evolutionary pressures on you work against it. You will get better as time goes by, so don't demand too much of yourself too early.

Question 5: I'd love to know your thoughts on how energy demand will change over the next few decades and how we get to a future where renewables are 1 cent per kWh.

Altman: That day may come, but I'm not sure... My guess is that eventually nuclear fusion will dominate electricity production on Earth. I think it will be the cheapest, most abundant, most reliable, most energy-dense source. I could be wrong, or it could be solar plus storage. My best guess is that it will end up being one of those two, and there will be cases where one beats the other, but they look like the two main options for truly global-scale energy at a cost of one cent per kilowatt-hour.

Question 6: What did you learn from what happened at OpenAI last year, and what made you come back?

Altman: One of the best lessons I learned is that we have an amazing team that is more than capable of running the company without me, and they actually did run it without me for a few days. As we progress toward artificial general intelligence (AGI), some crazy things may happen, perhaps even among us, because different parts of the world are reacting to us more and more strongly and the stakes keep rising. I used to think the team would do well under great pressure, but you never really know until you get to run the experiment. We had the chance to run that experiment, and I learned that the team is very resilient and, to a large extent, ready to run the company.

As for why I came back: you know, when the board called me the next morning and asked if I would consider coming back, my initial answer was no; I was angry. Then I thought about it and realized how much I love OpenAI, how much I love these people, the culture we've built, and our mission. I wanted to see it through with everyone.

Question 7: Can you talk about OpenAI's nested, matryoshka-doll-like corporate structure?

Altman: This structure took shape gradually. If we could do it over again, it is not the one I would choose. But when we started, we didn't expect to have a product; we were just going to be an artificial intelligence research lab. We had no concept of a language model, an API, or ChatGPT.

So, if you're going to start a company, you have to have some theory that you're going to sell a product someday, and we didn't think that way at the time. We didn't realize we were going to need so much money for computing, and we didn't realize we were going to have such a great business. When openai was founded, it was only intended to promote artificial intelligence research.

Question 8: Does creating something smarter than humans scare you?

Altman: It certainly does. Humans are getting smarter and more capable over time. You can do more than your grandparents could, not because individuals eat better or get more health care, but because society's infrastructure has advanced: the Internet and the iPhone have put a wealth of knowledge at your fingertips.

Society itself is a kind of AGI system. It is not controlled by any one person's brain; it is built brick by brick by everyone, creating higher achievements for those who come after you. Your children will have tools that you did not.

This is always a little scary. But I think there's much more good than bad. People in the future will be able to use these new tools to solve more problems.

(This article was first published on Titanium Media app, author | Lin Zhijia, editor | Hu Runfeng)
