Image source: @通义千文

Text | Silicon Valley 101

You may still remember the shocking "Operation Guzheng" in the TV series "The Three-Body Problem". The production team almost perfectly recreated the scene from the original novel in which nano-material flying blades slice apart the giant ship Judgment Day. So how was this scene created?

In recent years, the quality of special effects in Chinese film and television has improved greatly. Digital effects technology has opened a new door for enriching the visual effects of film and television works and expanding their expressive range. Now, with the rapid development of generative AI, the production efficiency and capabilities of the film and television effects industry are improving dramatically, while spatial-computing 3D hardware such as Apple Vision Pro is pushing the industry through a new round of transformation toward new media.

"The Three-Body Problem" guzheng action clip, the picture is taken from the "Three-Body TV Series"

In this episode, we invited Lu Beike, a well-known Chinese visual director and film and television producer and the visual director of "The Three-Body Problem", to reveal the story behind the special effects of the "Three-Body" TV series and to explore how cutting-edge technologies such as generative AI are now being used in the film and television effects industry.

The following are selected excerpts from the interview.

01 How "Operation Guzheng" was shockingly restored

"Silicon Valley 101": I believe that everyone has seen many of Director Bei's works. In addition to the TV series "The Three-Body Problem", Director Bei also serves as a visual director in well-known film and television works such as "The Legend of Shark Pearl" and "Silver Empire", the TV series "Red", "Guandong" and "The Legend of Yun Xi". He can be seen at the beginning of the program. Could you briefly share with us your professional experience in entering the visual directing industry?

Lu Beike: I entered this industry around 1999. My major was film and television advertising, and I started using computers for graphics around 1996. At that time I used industrial design software such as Autodesk's 3ds, which I found very interesting. That was also when Pixar films such as "Toy Story" were becoming popular, and I started to take an interest in this field.

Because this was also my major, I came into contact with more and more of this kind of content in my day-to-day work. The biggest change came in 2004, when I was running a post-production effects and animation studio with some friends in Beijing. I came into contact with many American B-movie projects and began doing outsourced animation and effects work for an American production company called Base. Later we founded a special effects company in Beijing called Base FX. In recent years I mainly worked as a director and screenwriter, and then I switched to specializing in special effects.

"Silicon Valley 101": Nowadays, we often see a lot of cool special effects in movies and TV series, but for non-practitioners, many people don't know what the specific work of film and television special effects is. Can you give us an example of the particularly shocking "Operation Guzheng" in the "Three-Body" TV series? Let's talk about how it was filmed, who the entire team is composed of, how to cooperate with the crew, and what is required in the post-production process. What kind of work can finally produce such an effect?

Lu Beike: My position is called visual director, which means I do a great deal of image design and conceptual design from a visual perspective, covering architectural design, industrial design, character design and many other areas. Simply put, any content completed on computers in post-production falls to the visual director, and a large part of that is VFX work. For the "Three-Body" TV series, the effects work can be roughly divided into two parts: traditional live-action VFX, and purely computer-generated animation. The animation side is further divided into two large parts. One is simulation, using animation to mimic the imagery of a fairly high-precision VR game. The other is what we call artistic animation: animation built on scientific and technological principles, such as the stylized animations of the planet and its mechanics in the BBC's "Planet Earth", or the strongly stylized animations of the farmer (turkey) and shooter hypotheses in "Three-Body".

"Operation Guzheng" is a very traditional vfx special effect. Its special effects are characterized by a large amount of content that needs to be shot in the early stage, not just computer special effects in the later stage. The work of vfx is not purely done by computers. Many things such as physical special effects and prop models are completed by real shots. For example, boats, rice, shelves, etc. may require the use of physical props, as well as related methods of using special props. Make some device that is real and not implemented by a computer. In "Operation Guzheng", there are a large number of rivers, hidden camps and the like, all of which were shot in real life. But with the current development of computer graphics, many things that used to be done with real models can now be done with computers.

"Silicon Valley 101": For the Panama Canal in "The Three-Body Problem", you actually found a similar river in China for real-life shooting, and then used post-production to make it look like the Panama Canal, right?

Lu Beike: Strictly speaking, we did not find a similar river; we found a terrain. We identified the characteristics of that terrain by referring to the novel's description of the Panama Canal. The Panama Canal is very long, part lake and part dam, and it has a very narrow stretch called the Gaillard Cut. We studied the real landforms of the Gaillard Cut and could not find a place in China exactly like the one described in the book, but we could find many mountains that looked similar: relatively high on one side and low on the other, with banks that are not all cliff-like and are partly an artificial environment. We found a place in Zhejiang with very similar landforms, but there was no river there, so the river was created entirely with CG.

Then for some partial shots we found another place that did have water, which is where the two pillars were erected in "Operation Guzheng". When we shot it, we actually only photographed one pillar on one side. That place naturally has water, but it is not a real river, just an arc-shaped terrain. The spot itself, the location of the ship collision, the location of the camp and so on were actually designed and shot separately at six or seven different locations and then stitched together, but in the final film it looks as if it was all filmed in the same place.

The design of "Operation Guzheng", the picture comes from Lu Beike's Zhihu

"Silicon Valley 101": looks really natural. When I watched the TV series before, I thought the crew had really been stationed on the Panama Canal for several months.

Lu Beike: We did originally have that plan, but we filmed in the summer of 2020, and because of the pandemic there was no way to travel abroad.

"Silicon Valley 101": Is the giant ship in "Operation Guzheng" real?

Lu Beike: Part of the ship is real. The close-up shots of people on the ship, and most of the shots of ETO (Earth-Trisolaris Organization) personnel on the deck, are real footage. But most of the shots of the ship sailing on the river and being cut at the end are CG. For the cutting part, we also built a physical set after the design was finished, but it covered only about 1/20 of the whole environment, because after the ship spreads out like playing cards, as described in the novel, we needed to shoot the plot of people going inside the ship to retrieve the hard drive, and that had to be completed on a real set.

Images from Lu Beike's Zhihu

"Silicon Valley 101": sounds like a huge project, so how many people are needed to complete such a shooting job?

Lu Beike: The live-action part involved hundreds of people, divided into many different trades. Some built the practical sets, some made props, and some handled photography and support. On the post-production CG side, you first design the picture in your mind and then do the storyboards. After that comes the pre-shooting preview: the shots are planned and animated first, then pre-edited for rhythm and duration. In fact, we had watched it countless times before the final cut aired in 2023; it was basically locked in the summer of 2020. This part of the work is called previz, or layout. Next comes the actual shooting, where the entire production team uses the previz as a blueprint for the various shots, which are not necessarily filmed in the same place. Counting aerial photography and everything else, about four to five hundred people were involved in front of and behind the camera. After all the early material is shot, there is another process called scanning, which uses aerial rigs or ground equipment such as lidar scanners to capture the real environment and bring it back as assets. The CG company then rebuilds from those assets and does the related rigging, materials, modeling and other work.

In the "Guzheng Action" part of Three-Body, there is also a large part called dynamic simulation. requires some physical dynamics simulation for effects such as steel fragments falling to the ground and a ship hitting the soil. This kind of simulation cannot be adjusted manually. The animated actions of the ship, which we call keyframe animation, can be adjusted manually. , but there is no way to simulate this physics. For example, if there are tens of millions of pieces of paper flying in the sky, it is impossible to adjust them by hand. In this process, we will use some related software, such as Houdini, which is very good for settlement and physics simulation. Sometimes we have to make some plug-ins ourselves to process some things.

When we run simulations, the image looks gray; we generally call it a gray model. It does not have the light and color of the final picture, because we must determine at an early stage whether the simulation is successful. We cannot light everything first and then look at it; that may be too late, because there is not enough computing power to render it again and again. Once the picture is confirmed, we do test renders. Generally speaking, feature films can afford test renders as moving images at small resolutions, but TV dramas do not have that budget, so they rely heavily on the director's experience: from a very small number of single frames you have to decide whether the shot can be rendered in large batches.
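
[Supplementary note] To see why iterating on full renders is prohibitive, here is a back-of-envelope estimate; all numbers are illustrative assumptions, not figures from the show.

```python
# Hypothetical render-farm arithmetic (every number is an assumption).
shot_frames = 20 * 60 * 24      # a 20-minute sequence at 24 fps
minutes_per_frame = 30          # assumed CPU render time per frame, per layer
layers = 5                      # assumed number of separately rendered layers

machine_hours = shot_frames * minutes_per_frame * layers / 60
print(f"{machine_hours:,.0f} machine-hours per full-quality pass")
# Every "just re-render it and see" iteration repeats this entire cost,
# which is why gray-model checks and single-frame tests come first.
```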

Once the lighting is determined, rendering is requested. This stage is all about computing power; some rendering relies on GPUs, but perhaps 70% of "Operation Guzheng" still used CPU computation, with renderers producing the final lighting. After these steps comes compositing, because rendering is not done all at once but layer by layer: for example a diffuse layer, which does not appear strongly lit, various conditional layers, an ambient-occlusion layer, and so on. Compositing software is used to merge these layers into a complete image, and finally, in the compositing software, you assemble the final shot and watch it as a real image sequence.
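
[Supplementary note] A simplified numeric sketch of that layer-by-layer merge; real compositing packages do far more, and the pass names here are generic assumptions.

```python
import numpy as np

h, w = 1080, 1920   # stand-in HD resolution; real plates are 2K/4K

# Separately rendered layers (random arrays standing in for real passes).
diffuse  = np.random.rand(h, w, 3)   # soft, indirect-looking light
specular = np.random.rand(h, w, 3)   # highlights
ao       = np.random.rand(h, w, 1)   # ambient-occlusion layer in [0, 1]

# One common recombination: occlusion darkens the diffuse layer,
# specular is added on top, and the result is clamped for display.
beauty = np.clip(diffuse * ao + specular, 0.0, 1.0)
print(beauty.shape)  # (1080, 1920, 3): a complete frame from merged layers
```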

Simulation storyboard of the hull being sliced, image from the Internet

"Silicon Valley 101": The process is really very complicated. So in a movie or a TV series, how much of the budget is usually allocated to post-production special effects?

Lu Beike: This is very unstandardized, and it is hard to give fixed figures. Generally speaking, an effects-heavy project like "Avengers" will likely spend around 50%, because the volume of effects is enormous and involves many fantasy and science fiction scenes. But a regular urban drama or romance may spend only 10%, 5%, or even less than 5%, depending on actual demand.

For "The Three-Body Problem", the current first season, which is the first part of the novel, is actually not bad. Our actual production budget is much lower than the Netflix version, probably only 1/10 or 1/20 of it. This is actually a difficulty in our domestic animation special effects production. The difficulty lies not in purely technical difficulties, but in balance, that is, how to produce excellent images with a small budget.

02 AI changes film and television special effects

"Silicon Valley 101": You once said in an interview after filming "The Three-Body Problem" that you were grateful for the progress of digital technology. If it were 10 years ago, it might be difficult to do what it is now with the current cost. Effect. I’m curious about the specific advancements in digital technology that allow a work like “The Three-Body Problem” to be filmed in China now?

Lu Beike: In fact, the progress has been continuous; there has been no disruptive innovation like ChatGPT's at the end of 2022 and beginning of 2023.

In the special effects industry, cost changes come from the maturing of software, which reduces the cost of using it. For example, 10 years ago, engine-based work such as Unreal Engine (UE) was rarely part of the workflow. But when "Three-Body" was made, a large amount of the previz in pre-production was engine work. What used to require a team of 8 to 10 people can now be done by 2 or 3. In addition, many departments do rendering optimization, which requires advances in rendering algorithms to save CPU computing power, but that progress is step-by-step, not disruptive.

Another point is that the ease of use of the technology has improved greatly. In the past there were many things we needed to write and verify ourselves, such as building tools in Python. But over the past decade or so, many problems have been solved at the technical level, and there are many ready-made solutions that no longer require us to dig through computer graphics research papers for methods. As a result, overall production costs have dropped significantly. Effects like these might have been possible more than ten years ago, but they were very expensive, and TV dramas could not afford them.

"Silicon Valley 101": Will the development of chips help budget optimization? Why didn't GPUs be used when rendering the effects of "Three-Body"?

Lu Beike: Because rendering mainly depends on the renderer, and the renderer's algorithms determine that some things do not work like distributed rendering. GPU design favors computing many simple things across different channels, but the ray tracing and certain occlusion algorithms were originally written for CPU computation, so those renderers support CPU computing and do not perform that kind of massively distributed simple operation. A great strength of the GPU is that it has many threads, but what each thread computes is actually very simple. Many renderers, however, involve very heavy computations and were not developed around that model, so we had to inherit the CPU solution. That said, GPU computing methods are indeed becoming more common; in fact about 30% of "Three-Body" was computed on GPUs and 70% rendered with traditional CPU computation.

"Silicon Valley 101": In the current special effects industry, which company's CPU and GPU do you think is better to use?

Lu Beike: Currently the CPUs are basically Intel, with some AMD and few from other companies. But nobody uses brand as the criterion, because this rendering model means that as long as the same underlying algorithm distributes the computing work, it can be allocated to any of them.

"Silicon Valley 101": What impact will the arrival of this wave of generative AI technology in have on the entire special effects industry?

Lu Beike: The biggest impact at present is on pre-production and compositing, and some of it is on dynamic effects. Dynamic effects involve programming work that is very laborious to do by hand, and there are actually very few effects artists who can both program well and understand the visuals. ChatGPT's programming ability really does feel like a big improvement to us.

The improvement in pre-production is mainly in conceptual design. Conceptual design requires a lot of divergent ideas, which we used to develop mainly by hand-drawing; turning a simple composition or line drawing into a finished piece took a lot of resources and time. Now, with AI, that completion process has become extremely fast. In particular, for concept design and production art personnel who are not original designers, total demand is only about 20% of what it was. However, the number of creative people, those who give the AI its goals and prompts, will not decrease. That part is hard for AI to replace directly.

Also, the output of finished-draft lighting test images may increase by a factor of hundreds. In the past, a science fiction project might have 500 to 1,000 pictures in the early stages; now it is entirely possible to reach 10,000. You see many more different things and the director has more choices, though from another angle choosing may become harder, precisely because there are so many pictures.

"Silicon Valley 101": Has and AI generative video related technologies begun to have an impact on this industry?

Lu Beike: They have, and this technology has been used in recent films. We have in fact used products from companies such as Runway, including its Gen-2. Their current strength is creating pictures very quickly, which suits situations where there is no particularly specific goal and a background can be dropped in casually. For example, during filming, a scene will sometimes appear on a TV in the shot. In the past, for copyright reasons, you couldn't use other people's works, so we really had to spend time filming or making one. Now, with AI, a lot of that kind of work is saved.

Generally speaking, if what you need does not sit inside a specific plot or logic, you can let AI do it. What it is particularly weak at right now is logical and conceptual coherence. This does not mean whether the picture itself or the faces are consistent, but mainly that the strict logic between one shot and the next is relatively poor. We often see that the trailers these tools produce look better; that is because a trailer does not demand such strong logic, and slow motion comes out better than normal-speed motion. By its nature, slow motion is dominated by poses: someone holds a pose, and the movement is really micro-dynamics around a held state. But constant-speed, logical movement, like my hand gesturing as I talk, is very difficult to simulate completely in a 3D environment. That should be a problem generative AI algorithms will find hard to solve in the short term.

A hand image previously generated by AI, image from the Internet

Our main solution now is to use ControlNet on the two-dimensional image. That is, you first lay out the frame with your own composition; the composition itself is something you make, a 3D file or an image with clear outlines, and then you let the AI repeatedly relight and re-render it for you, which it can do very quickly.
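
[Supplementary note] A minimal sketch of this kind of ControlNet workflow using the open-source diffusers library; the model IDs are common public examples and the input file name is hypothetical, so this is not necessarily the exact tooling the team used.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Load a ControlNet conditioned on edge maps, plus a Stable Diffusion base.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA GPU

# The composition is yours: an outline render exported from a 3D scene.
# (Hypothetical file name; any clear line/edge image works as the control.)
outline = Image.open("canal_layout_edges.png")

# The AI only "relights and re-renders" within the fixed composition.
image = pipe(
    "a narrow canal at dusk, cinematic lighting, concept art",
    image=outline,
    num_inference_steps=30,
).images[0]
image.save("canal_concept.png")
```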

I read a very good book at the beginning of 2023 by Stephen Wolfram, the author of the Mathematica software, about how GPT works. It argues that what current generative AI algorithms are best at solving are the reducible parts. Human language is a regular thing, but language can never be directly equivalent to a phenomenon. Take a mineral water bottle: you cannot describe it 100% with language. To truly describe it, you must measure it bit by bit and specify its exact reflectivity, refraction, and transparency; only with those things can you fully describe the bottle. That part is not what current generative AI is good at, because it is irreducible: the computation itself is hard to shortcut with an algorithm. It is a bit like the difference between closed-form procedural methods and Monte Carlo-style algorithms. A Monte Carlo algorithm means you have to measure the thing; you cannot solve it with a simple formula. When you use AI to make a picture, you may find it is easy to generate things people remember only loosely, such as clouds, moving or still. AI learns and handles that kind of thing easily, because people's impression of it is vague: our impression of clouds in the sky is a concept, and nobody remembers how the points of any single cloud were arranged. Some meteorologists may feel that many AI-generated clouds are unreasonable, but ordinary viewers will never notice. That is because our understanding of such things is conceptual and highly simplified.
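
[Supplementary note] For a concrete sense of the Monte Carlo idea contrasted here with closed-form procedures, the classic textbook example estimates π by repeated measurement rather than by evaluating a formula; the estimate simply approaches the answer as sampling continues.

```python
import random

# Monte Carlo estimate of pi: sample random points in the unit square
# and measure the fraction that lands inside the quarter circle.
def estimate_pi(samples: int) -> float:
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

for n in (1_000, 100_000, 1_000_000):
    print(n, estimate_pi(n))   # accuracy improves with more sampling
```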

But for things people are already good at observing, problems show up. For example, we often find that AI-generated hands have problems, because the movement of a hand is full of logic, and its position is not easy to reduce. It is constrained both by logical relationships and by its arrangement in three-dimensional space; it cannot move in arbitrary directions. It is simultaneously constrained and logical, yet there is no way to pin down an absolute regularity for it. This kind of thing is very difficult to make with AI.

So the more we understand this, the more we know where current generative AI graphics algorithms can be used and where using them does not make much sense. Shadows, for example: it is simpler to generate shadows with traditional methods. It is very difficult with AI algorithms; the result is often inaccurate, something that merely looks like a shadow, and if you look closely you find the shadow is wrong. That is because everything currently happens at the flat-image stage, with no information in the z direction, the depth direction, which would involve a great deal of computing power. For now it all happens in a flat, two-dimensional space where reducibility is tractable.

"Silicon Valley 101": Regarding , do you use more developed software applications such as gen-2? Or is it possible to use a model such as stable diffusion to do some development on your own?

Lu Beike: We use both. SD is used more often, and Midjourney is also common. Stable Diffusion supports ControlNet well, and as long as there is a suitable model, or you have made a suitable one yourself, it clearly reduces costs in many kinds of development. As I mentioned, what used to need 10 people may now need only 3. SD clearly helps early development and conceptual design. In the sketching stage, for example, you can ask it to generate a starry sky or a certain style of building; as long as the LoRA or checkpoint model in your hands carries that information and data, it is quite easy to reproduce. On that basis, the designer's skill fills in the places where it made mistakes, which is certainly much faster than starting from scratch.
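
[Supplementary note] A minimal sketch of that checkpoint-plus-LoRA setup for concept sketching with the diffusers library; the base model ID is a common public example and the LoRA file is hypothetical.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA GPU

# Layer on a style LoRA (hypothetical file) carrying, say, a particular
# architectural style the art department has trained itself.
pipe.load_lora_weights("./loras/retro_scifi_buildings.safetensors")

# Generate divergent sketches quickly; the designer picks and paints over.
for i in range(4):
    image = pipe(
        "concept sketch of a starry sky over a retro sci-fi building",
        num_inference_steps=25,
    ).images[0]
    image.save(f"concept_{i}.png")
```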

"Silicon Valley 101" : So currently generative AI is already helping the film and television special effects industry to reduce costs and increase efficiency? If you quantify it, how much do you think it can help you speed up your progress or reduce your budget?

Lu Beike: For conceptual design it may be more than 30%-40%; for regular effects work it is currently about 5%. There is currently a threshold: AI-generated 3D models are not yet usable. Even if such things exist, and I believe Orio has begun such development, they are still far from real application in film and television drama. So AI's cost-cutting and efficiency gains are currently strongest in the early stages, without a particularly large impact on the middle stages and final compositing.

"Silicon Valley 101": What is the shortcomings of in the middle and late stages? How can we reach a commercially usable state?

Lu Beike: It's mainly accuracy; it is not logically accurate enough. For example, using AI to cut out mattes in compositing has improved greatly, but if you want it to correctly change certain lighting, it is still basically unusable at present. There are also parts that require consistent treatment based on your aesthetic sense, which currently cannot be taught. Take the simplest case, color grading: in most cases a film's image requires its light to carry a certain subjective intention, what we call expressionistic lighting. That kind of adjustment depends on your understanding of the story and the mood of the characters. It is hard for AI to solve; it genuinely needs a human's fine adjustment. It is hard to reduce and has no absolute regularity, because everyone pursues innovation in sensibility and aesthetics rather than pursuing sameness.

"Silicon Valley 101": In what other areas do you hope generative AI will develop next in , so that it can better help the special effects industry?

Lu Beike: I feel the development of world models should be a good next direction, because one of the core features of a world model is precisely to let AI truly recognize the irreducible things and understand where its own boundaries are.

Today's AI feels like it lacks real self-boundaries, what in people we call self-awareness. Think of a person who does not know where his strengths and weaknesses are, who does not know what he does not know; that is actually quite scary, and AI is like this now. It will try hard to do whatever you ask, but once it has a true understanding of the world, it will know by itself which things it is not necessarily good at handling. If you deal with a normal person, he will tell you what his specialty is and what he is good at, right? So there is no need to insist that AI know everything. At present it is just a model based on algorithms that will dutifully compute whatever comes. But if self-awareness were added to such a model, it would sense where the boundaries are in real work, because everything we do has a framework; that is what the real world is like.

The framework of the real world comes from a vast number of physical facts, the emotional facts of interpersonal relationships, and a kind of logic in how the world operates. If you understand the world only through language, you cannot discover its real framework. I think GPT-4 is still a language model and has not yet reached the level of an open world model. But this is not really my specialty; I am a director, an artistic creator, and it is because I have lived through these changes in real work that I have this first-hand sense. It is not theory derived from theory, but feedback from practice.

03 The evolution of the special effects industry and the impact of Vision Pro

"Silicon Valley 101": The film and television industry began to slowly unlock some CG special effects technologies about 100 years ago. Can Director Bei help us look back a little bit at some milestone films or events in the development of the film and television special effects industry?

Lu Beike: Worldwide, handmade special effects have existed since the early Méliès era, including the 1927 German film "Metropolis", whose science fiction and fantasy environments were painted on glass at the time and captured with the camera. A later landmark work was "King Kong", in which frame-by-frame physical animation appeared. The first "Star Wars", which impressed so many of us, is actually not a landmark in the development of computer graphics; its technique for realizing space is actually very, very close to the approach of 1968's "2001: A Space Odyssey". But around that period, Pixar's early animation combined many CG creatures with physical ones, which had great significance for its time. Cameron's "The Abyss" used, for the first time, a transparent object computed with dynamics, and in "Terminator 2" this was developed into the liquid-metal man. Why are these iconic? Because they brought about a result: a kind of creative thinking that must be rooted in digital graphics and imagery.

Image from "Terminator 2"

You can shoot spaceships, lightsabers, and monsters with physical effects, but the liquid-metal man and the seawater face in "The Abyss" cannot be shot with physical effects; it is simply impossible. So they mark a creative watershed: computer animation gained a certain independence and opened a completely new road for creative needs that no other effects approach can fulfill. In "Three-Body", for example, the radar and the wind could be produced entirely with traditional effects, but the entire 3D ship being cut into so many small strips is impossible for traditional effects.

These are called landmark works because their appearance opened new paths. There is another big milestone after that: although you know the picture itself is fake, you can no longer judge whether it was made with CG just by looking. For example, when we watch "Jurassic Park" now, we can clearly see which dinosaurs are CG and which are physical models. But by the beginning of the 21st century, the effects of some movies surpassed that threshold of realism, making it hard for the audience to tell, such as "Avatar". Before "Avatar", you could still feel that Gollum in "The Lord of the Rings" was 3D. But in "Avatar" there are parts with real people, and you cannot tell them apart from the pure CG. There is also the famous "The Curious Case of Benjamin Button", whose facial reconstruction reached a very high level; it is a milestone in digital makeup effects. The audience watches him as a real person, not, as in "The Lord of the Rings" or "Avatar", as something self-consistent inside a special fantasy or technological setting. Audiences are very sensitive to this; you can tell immediately if it does not look like a real person, so the difficulty is very high.

"Silicon Valley 101" : Although the technology is becoming more and more mature, we also found that the current special effects production involves a large number of steps. The 20-minute presentation of the three-body "Operation Guzheng" also took a long time to shoot. At present, there are still a lot of things that CG cannot do and need to be shot in real life. So what kind of development do you think the entire special effects industry will have in the future? Could it be that we need to shoot less and less things? Ten years from now, will "Operation Guzheng" be made using pure CG?

Lu Beike: Entirely possible. The trend toward purely digital production is inevitable and undisputed. The media themselves, such as Apple's new Vision Pro, have already begun to shift. This so-called shift in media really means that immersive media will become more and more common, and once such media arrive, the traditional two-dimensional, shooting-based approach to capturing images should gradually be replaced by purely 3D-produced imagery. An immersive medium is by nature better matched to purely digital production methods: some things simply cannot be photographed.

For example, to use Vision Pro in an MR environment, you put on the glasses and the spatial computation is done; now one person walks around on your desk, another sings here, and another dances over there. How could this scene be shot with a camera? There is no way. To realize this application you can only have CG characters and CG environments, and you have to compute the environment from spatial data to recover the perspective of the table, which is called reverse tracking. All of these are CG technologies; they cannot be accomplished by real shooting. This is a difference in principle, with no other possible solution.

"Silicon Valley 101": How long do you think it will be until that day of ?

Lu Beike: It will come very quickly. Animation should be able to do it now, but real people are still some way off. There is also the playback environment: Vision Pro may be able to do it now, but that does not mean cheaper VR glasses can, because different devices have different computing power, and presenting a picture solved in real time requires engine support. If it is not solved in real time, the device does not need to recover the spatial computation; that is equivalent to sitting there watching a piece of pre-computed or pre-recorded content, which should already be possible now.

[Related supplementary information]

CG: the abbreviation of computer graphics. CG effects are illusions created by computers, used when traditional effects methods cannot meet a film's requirements; they can achieve almost any effect humans can imagine, and are mainly divided into three-dimensional effects and compositing effects.

Stephen Wolfram: a well-known British-American scientist in computer science, mathematics, and theoretical physics. As a programmer, he is one of the inventors of the mathematics software Mathematica and is also known for the computational knowledge engine Wolfram Alpha; as a businessman, he is the founder and CEO of Wolfram Research. In March 2023 he published the book "What Is ChatGPT Doing ... and Why Does It Work?".

Monte Carlo algorithm: also known as the statistical simulation method, a numerical computation method guided by probability and statistics theory, proposed in the mid-1940s alongside the development of science and technology and the invention of electronic computers. It refers to using random numbers (or, more commonly, pseudo-random numbers) to solve many computational problems; its main working principle is continuous sampling and gradual approximation.

Runway: an American provider of AI image and video editing software that offers designers, artists, and developers a set of tools and platforms for creating works with artificial intelligence. Gen-2 is its multi-modal artificial intelligence system that can generate videos from text, pictures, or video clips.

Stable Diffusion: a text-to-image and image-to-image generation model based on latent diffusion models, which can generate high-quality, high-resolution, high-fidelity images from any text or image input.
