|
1934 | 1934 | monitoring: "After giving the community time to use the models and explore different applications, we collected feedback." |
1935 | 1935 | feedback: unknown |
1936 | 1936 |
|
| 1937 | +- type: model |
| 1938 | + name: Genie 2 |
| 1939 | + organization: Google DeepMind |
| 1940 | + description: Genie 2 is a foundation world model that, from a single prompt image, can generate an endless variety of action-controllable, playable 3D environments for training and evaluating embodied agents.
| 1941 | + created_date: 2024-12-04 |
| 1942 | + url: https://deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/ |
| 1943 | + model_card: unknown |
| 1944 | + modality: |
| 1945 | + explanation: "the model is prompted with a single image generated by Imagen 3" |
| 1946 | + value: image; video |
| 1947 | + analysis: unknown
| 1948 | + size: unknown
| 1949 | + dependencies: [Imagen 3]
| 1950 | + training_emissions: unknown
| 1951 | + training_time: unknown
| 1952 | + training_hardware: unknown
| 1953 | + quality_control: Google DeepMind emphasizes responsible development, building towards more general AI systems that can safely carry out tasks.
| 1954 | + access: closed |
| 1955 | + license: Unknown |
| 1956 | + intended_uses: Genie 2 can be used for generating diverse environments for training and evaluating AI agents, rapidly prototyping interactive experiences, and experimenting with novel environments.
| 1957 | + prohibited_uses: unknown
| 1958 | + monitoring: unknown
| 1959 | + feedback: unknown
| 1960 | +- type: model |
| 1961 | + name: Veo 2 |
| 1962 | + organization: Google DeepMind |
| 1963 | + description: Veo 2 is a state-of-the-art video generation model that creates videos |
| 1964 | + with realistic motion and high-quality output, up to 4K, with extensive camera |
| 1965 | + controls. It simulates real-world physics and offers advanced motion capabilities |
| 1966 | + with enhanced realism and fidelity. |
| 1967 | + created_date: 2024-12-16 |
| 1968 | + url: https://deepmind.google/technologies/veo/veo-2/ |
| 1969 | + model_card: unknown |
| 1970 | + modality: |
| 1971 | + explanation: Our state-of-the-art video generation model ... Veo 2
| 1973 | + value: text; video |
| 1974 | + analysis: Veo 2 outperforms other leading video generation models, based
| 1975 | + on human evaluations.
| 1976 | + size: unknown |
| 1977 | + dependencies: [] |
| 1978 | + training_emissions: unknown |
| 1979 | + training_time: unknown |
| 1980 | + training_hardware: unknown |
| 1981 | + quality_control: Veo 2 includes features that enhance realism, fidelity, detail, |
| 1982 | + and artifact reduction to ensure high-quality output. |
| 1983 | + access: limited |
| 1984 | + license: unknown |
| 1985 | + intended_uses: Creating high-quality videos with realistic motion, different styles, |
| 1986 | + camera controls, shot styles, angles, and movements. |
| 1987 | + prohibited_uses: unknown |
| 1988 | + monitoring: unknown |
| 1989 | + feedback: unknown |
| 1990 | + |
| 1991 | +- type: model |
| 1992 | + name: Gemini 2.0 |
| 1993 | + organization: Google DeepMind |
| 1994 | + description: Gemini 2.0 is a new AI model from Google DeepMind, designed
| 1995 | + for the 'agentic era.'
| 1996 | + created_date: 2024-12-11 |
| 1997 | + url: https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#ceo-message |
| 1998 | + model_card: unknown |
| 1999 | + modality: |
| 2000 | + explanation: The first model built to be natively multimodal, Gemini 1.0 and |
| 2001 | + 1.5 drove big advances with multimodality and long context to understand information |
| 2002 | + across text, video, images, audio and code... |
| 2003 | + value: text, video, image, audio; image, text |
| 2004 | + analysis: unknown |
| 2005 | + size: unknown |
| 2006 | + dependencies: [] |
| 2007 | + training_emissions: unknown |
| 2008 | + training_time: unknown |
| 2009 | + training_hardware: |
| 2010 | + explanation: It’s built on custom hardware like Trillium, our sixth-generation |
| 2011 | + TPUs. |
| 2012 | + value: custom hardware like Trillium, our sixth-generation TPUs |
| 2013 | + quality_control: Google is committed to building AI responsibly, with safety and |
| 2014 | + security as key priorities. |
| 2015 | + access: |
| 2016 | + explanation: Gemini 2.0 Flash is available to developers and trusted testers, |
| 2017 | + with wider availability planned for early next year. |
| 2018 | + value: limited |
| 2019 | + license: unknown |
| 2020 | + intended_uses: Developing more agentic applications that can understand
| 2021 | + more about the world around you, think multiple steps ahead, and take
| 2022 | + action on your behalf, with your supervision.
| 2023 | + prohibited_uses: unknown |
| 2024 | + monitoring: unknown |
| 2025 | + feedback: unknown |
| 2026 | + |