YouTube recently introduced a requirement for creators to disclose when realistic content was made with AI, a move intended to keep viewers from mistaking synthetic videos for genuine footage. A new tool in Creator Studio requires creators to disclose when content that viewers could mistake for a real person, place, or event was produced with altered or synthetic media, including generative AI. The change comes as experts warn that AI-generated content poses a significant risk of misleading the public, particularly during elections. The policy does not require disclosure for obviously unrealistic or animated content, but creators must label videos that use a realistic person's likeness, such as digitally altered faces or synthetic voices. Enforcement measures for creators who fail to comply are still under consideration. The labels will gradually appear across all YouTube formats, starting with the mobile app and later extending to desktop and smart TVs.