Fueling Creativity in the Digital World

SIGGRAPH 2023 Technical Papers research showcases innovation in further advancing the field of character animation

CHICAGO, July 26, 2023 — (PRNewswire) — Since its inception 50 years ago, SIGGRAPH has served as an epicenter for inventive ideas and innovative research in the ever-evolving field of computer graphics and interactive techniques. These innovations have given us realistic CGI, or computer-generated imagery, that has propelled traditional filmmaking and gaming into a new era.

This year, computer scientists, artists, developers, and industry experts around the world will convene 6–10 August in Los Angeles for SIGGRAPH 2023. Fittingly, the theme behind SIGGRAPH this year, notes Conference Chair Erik Brunvand, is to recognize 2023 as the "Age of SIGGRAPH," honoring the full chronology of the industry, the community, and the organization — then, now, and far into the future.

One example of CGI innovation is the ongoing advance in character animation. Many of these advances are driven by motion capture technology, which consistently delivers the efficient, state-of-the-art visual effects prevalent in our beloved superhero blockbusters.

The creative potential in the field of character animation research remains endless.

"Character animation represents a truly unique field within computer graphics. The goal of character animation is to replicate the intelligence and behavior of living beings, and this extends not only to humans and animals, but also to imaginary creatures," says Libin Liu, assistant professor at Peking University who will be presenting new research, along with his team, as part of the SIGGRAPH 2023 Technical Papers program.

"Over the years, the research community has explored many approaches toward achieving this goal … and with the exciting progress we've seen in AI, there will be a boom in research that utilizes large language models or more comprehensive multi-modal models, as the 'brain' for the character, coupled with the development of new motion representation and generation frameworks to translate the 'thoughts' of this brain into realistic actions," says Liu. "It's an exciting time for all of us in this field."

As a preview of the popular Technical Papers program, here is a spotlight of three unique approaches that showcase innovation in advancing the character animation field even further.

Body Language

Many of us unconsciously converse or express ourselves using physical gestures. Some of us may gesture with our hands, shift our body posture to make a point, or engage another body part (eyes or legs) while we talk. Indeed, speech and communication go hand in hand with physical gesturing — a complicated sequence to represent digitally.

A team of researchers from Peking University and the National Key Lab of General AI in China has introduced a sophisticated computational framework that captures the detailed nuances of physical human speech gestures. The framework does so while allowing users to control those details using a broad range of input data, including a short text description, a brief demonstration clip, or even data representing animal gestures, such as video of a bird flapping or spreading its wings.

The key component underpinning the team's new system is a novel representation of motions, specifically quantized latent motion embeddings, coupled with diffusion models, one of the key components behind recent AI-driven image generation techniques. This representation significantly reduces ambiguity and ensures the naturalness and diversity of movements. Additionally, the team enhanced the CLIP model developed by OpenAI with the ability to interpret style descriptions in multiple forms, and developed an efficient technique to analyze sentences, enabling the digital character to understand a speech's semantics and determine the optimal time to gesture.
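To give a flavor of the quantization half of this idea, here is a minimal, self-contained sketch of vector-quantizing motion feature vectors against a learned codebook, so that each frame is represented by a discrete latent index. This is an illustrative toy (the function names, k-means-style codebook learning, and dimensions are our assumptions, not the paper's method), and it omits the diffusion model that would generate sequences over these discrete latents.

```python
import numpy as np

def build_codebook(motion_frames, num_codes, iters=10, seed=0):
    """Toy k-means-style codebook over per-frame motion feature vectors.

    motion_frames: (N, D) array of motion features (e.g., joint rotations).
    Returns a (num_codes, D) array of codebook vectors.
    """
    rng = np.random.default_rng(seed)
    # Initialize codes from randomly chosen frames (advanced indexing copies).
    codes = motion_frames[rng.choice(len(motion_frames), num_codes, replace=False)]
    for _ in range(iters):  # a few Lloyd iterations
        dists = np.linalg.norm(motion_frames[:, None] - codes[None, :], axis=-1)
        assign = dists.argmin(axis=1)
        for k in range(num_codes):
            members = motion_frames[assign == k]
            if len(members):
                codes[k] = members.mean(axis=0)
    return codes

def quantize(motion_frames, codes):
    """Map each frame to its nearest codebook entry (its discrete latent)."""
    dists = np.linalg.norm(motion_frames[:, None] - codes[None, :], axis=-1)
    idx = dists.argmin(axis=1)          # discrete latent per frame
    return idx, codes[idx]              # indices and quantized reconstruction
```

In a full system of this kind, a generative model (here, a diffusion model) would be trained over the index sequences rather than over raw poses, which is what reduces ambiguity while preserving motion diversity.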

Building on the advances made in the digital human space, this new work addresses the challenge of producing digital characters that have the capability to perform physical gestures during conversations, and with minimal direction or instruction. The system supports style prompts in the form of short texts, motion sequences, or video clips and provides body part-specific style control; for instance, combining the gesture of a yoga pose (warrior one) with gestures of feeling "happy" or "sad."

"With this work, we've moved another step closer to making digital humans behave like their real-life counterparts. Our system equips these virtual characters with the capability to perform natural and diversified gestures during conversations, thereby considerably enhancing the realism and immersion of interactions," says Liu, a lead author of the research and assistant professor at Peking University's School of Intelligence Science and Technology.

"Perhaps the most exciting aspect of this technology is its ability to let users intuitively control the character's motion using language and demonstrations. This also allows the system to interface seamlessly with advanced artificial intelligence like ChatGPT, bringing an increased level of intelligence and lifelikeness to our digital characters."

Liu and his collaborators, Tenglong Ao and Zeyi Zhang, both at Peking University, are set to demonstrate their new work at SIGGRAPH 2023. View the team's paper and accompanying video on their project page.

Realistic Robots in Motion

Who doesn't love a dancing robot? But easily replicating or simulating legged robots and their dynamic motions remains a challenge in the field of character animation. In new research, an international team from Disney Research Imagineering and ETH Zürich describes an innovative technique that enables the optimal retargeting of expressive physical motions onto freely walking robots.

Retargeting motion — editing existing motions, whether from motion capture data or other sources of digital artistic creation — is a quicker way to simulate physical motion in the digital world. However, the significant differences in proportions, mass distributions, and number of degrees of freedom between the source motion and the target system make retargeting motions onto a different system most challenging.

To that end, this new technique enables the retargeting of captured or artist-provided motion onto legged robots of vastly different proportions and mass distributions.

"We can take an input motion, and then automatically solve for the best possible way that a robot can execute that motion," note the researchers. Their method takes into account the robot dynamics and also the robot's actuation limits, which means that even highly dynamic motions can be successfully retargeted. The result is that the robot can perform the motion without losing its balance — not an easy feat.

The latter is a major hurdle the team has overcome with this new approach. Due to the significant differences in size and shape between animals or artist-created rigs and a legged robot, retargeting motions is difficult to achieve with standard optimal control techniques and manual trial-and-error methods.

The researchers' approach is a differentiable optimal control (DOC) technique that allows them to solve for a comprehensive set of parameters, making the retargeting agnostic to changes in proportions and mass distributions, as well as to differences in the number of degrees of freedom between the source of input motion and the actual physical robot.
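The core loop of such an approach can be pictured as: roll out the robot's dynamics under candidate control parameters, measure how far the result deviates from the reference motion, and follow the gradient of that loss. The toy sketch below illustrates this with a 1-D point mass, a torque-limited PD controller, and finite-difference gradients; every name and number is illustrative and stands in for DOC's analytic differentiation through full robot dynamics, which the paper describes.

```python
import numpy as np

def rollout(gains, reference, dt=0.05, u_max=2.0):
    """Simulate a unit point mass tracking a reference with a torque-limited PD controller."""
    kp, kd = gains
    x, v = 0.0, 0.0
    traj = []
    for x_ref in reference:
        # Clip the control to model the robot's actuation limits.
        u = np.clip(kp * (x_ref - x) - kd * v, -u_max, u_max)
        v += u * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

def retargeting_loss(gains, reference):
    """Mean squared tracking error between the rollout and the reference motion."""
    return np.mean((rollout(gains, reference) - reference) ** 2)

def optimize_gains(reference, steps=100, eps=1e-4):
    """Finite-difference gradient descent standing in for DOC's analytic gradients."""
    gains = np.array([1.0, 0.1])
    for _ in range(steps):
        grad = np.zeros_like(gains)
        for i in range(2):
            e = np.zeros(2)
            e[i] = eps
            grad[i] = (retargeting_loss(gains + e, reference)
                       - retargeting_loss(gains - e, reference)) / (2 * eps)
        # Backtracking step: accept only updates that reduce the loss.
        base = retargeting_loss(gains, reference)
        for lr in (1.0, 0.3, 0.1, 0.03):
            cand = np.maximum(gains - lr * grad, 0.0)
            if retargeting_loss(cand, reference) < base:
                gains = cand
                break
    return gains
```

Because the loss is differentiated through the simulated dynamics, the actuation limits and the robot's physics directly shape the solution — the same reason DOC can retarget highly dynamic motions without the robot losing balance.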

The team behind DOC includes Ruben Grandia, Espen Knoop, Christian Schumacher, and Moritz Bächer at Disney Research, and Farbod Farshidian and Marco Hutter at ETH Zürich. They will showcase their work as part of the SIGGRAPH 2023 Technical Papers program. For the paper and video, visit the team's project page.

Tennis, Anyone?

The ultimate dream of computer gaming enthusiasts is to be able to control their players in the virtual world in a way that mirrors the players' athleticism and movement in the physical world. The authenticity of the game is what counts.
