10 Essential Text to Video Diffusion Basics for Developers

    Prodia Team
    February 24, 2026

    Key Highlights:

    • Prodia is an innovative API platform for high-performance media creation with a latency of 190ms.
    • It simplifies the integration of media generation tools into existing tech stacks, supporting tasks like image-to-text and inpainting.
    • The market for media generation APIs is expected to grow significantly by 2026, positioning Prodia as a leader.
    • Text-to-video diffusion models transform text into dynamic visual content, enhancing narrative coherence.
    • These models will evolve to allow real-time interaction and personalization by 2026.
    • Key components of text-to-video models include text encoders, diffusion processes, and temporal consistency mechanisms.
    • Common features include high-quality output, real-time processing, user-friendly interfaces, and support for multiple formats.
    • Advantages of text-to-video models include enhanced creativity, cost efficiency, scalability, and rapid prototyping.
    • Limitations include computational intensity, quality variability, temporal coherence issues, and dependency on training data.
    • Ethical considerations involve content authenticity, bias in outputs, user consent, and transparency.
    • Applications span marketing, education, healthcare, and entertainment, significantly impacting content creation.
    • Future challenges include improving quality, scalability, ethical governance, and integration with emerging technologies.

    Introduction

    Text-to-video diffusion technology is revolutionizing media creation, presenting developers with remarkable opportunities to turn textual narratives into vibrant visual experiences. By leveraging cutting-edge AI capabilities, these models streamline production processes and amplify creativity and engagement across diverse sectors, from marketing to education.

    However, as this groundbreaking technology gains momentum, pressing questions emerge: How can developers adeptly navigate the complexities of implementation while upholding ethical standards? Delving into the essentials of text-to-video diffusion models uncovers both the thrilling potential and the challenges that await creators in this dynamic digital landscape.

    Prodia: The High-Performance API for Media Generation

    Prodia stands out as an innovative API platform that empowers developers with high-performance media creation tools. With an output latency of just 190ms, it lets developers put text-to-video diffusion basics into practice quickly, without the complexities often associated with GPU setups.

    This developer-first approach ensures seamless integration into existing tech stacks, making Prodia the ideal choice for those eager to enhance their applications with advanced AI capabilities. Its suite of ultra-fast APIs supports various media creation tasks, including:

    • image to text
    • image to image
    • inpainting

    allowing developers to produce high-quality outputs efficiently.
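    For illustration, a REST-style media-generation integration typically reduces to a single authenticated POST. Everything below, including the endpoint URL and payload fields, is a hypothetical sketch rather than Prodia's documented API; consult the official docs for the real schema.

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape; not Prodia's real API.
API_URL = "https://api.example.com/v1/generate"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble an authenticated JSON request for a media-generation job."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request("a lighthouse at dusk", api_key="YOUR_KEY")
# urllib.request.urlopen(req) would submit the job; polling and response
# handling depend on the provider's actual schema.
```

    In practice the provider's SDK or documented endpoints replace the placeholders above; the shape of the call (prompt in, job or media out) stays the same.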

    As the market for media generation APIs is projected to grow significantly by 2026, Prodia is well-positioned to lead this evolution. With unmatched performance, it provides developers the tools they need to stay ahead in a competitive landscape. Don't miss the opportunity to elevate your projects: integrate Prodia today.

    Understanding Text-to-Video Diffusion Models

    Text-to-video diffusion models represent a significant leap forward in generative AI, transforming textual descriptions into dynamic visual content. During training, these systems gradually add noise to video representations and learn to reverse that corruption; at generation time, they start from pure noise and iteratively denoise it. By conditioning each denoising step on the text input, they produce visually coherent sequences that align with the narrative and match user intent.
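    The "gradually adding noise" step can be made concrete. Below is a minimal sketch of the forward (noising) process that diffusion models are trained to reverse; the linear schedule values are illustrative assumptions, not taken from any particular model.

```python
import numpy as np

def forward_noise(video, t, betas, rng):
    """Corrupt a clean video tensor to noise level t (the forward process).

    video: array of shape (frames, height, width, channels)
    t:     integer timestep index into the noise schedule
    """
    alpha_bar = np.cumprod(1.0 - betas)[t]      # cumulative signal retention
    noise = rng.standard_normal(video.shape)    # epsilon ~ N(0, I)
    # x_t = sqrt(alpha_bar) * x_0 + sqrt(1 - alpha_bar) * epsilon
    return np.sqrt(alpha_bar) * video + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)           # illustrative linear schedule
clean = rng.standard_normal((16, 64, 64, 3))    # 16 frames of 64x64 RGB
noisy = forward_noise(clean, t=500, betas=betas, rng=rng)
```

    During training the model learns to predict the noise added at each step; generation then runs the process in reverse, starting from pure noise and conditioning each denoising step on the text embedding.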

    As we look ahead to 2026, these models will evolve further, boasting enhanced capabilities for real-time interaction and personalization. Brands and creators will have the ability to produce content where dialogue, visuals, and pacing adjust dynamically based on audience data or real-time input. This shift marks a transition from static visual content creation to a more responsive and engaging medium.

    Current trends indicate that text-to-video diffusion models are improving not only in output quality but also in their ability to synthesize complex narratives. With the rise of AI film creation, creators are leveraging these tools to generate customized material at scale, producing distinct clips tailored for various audience segments. This personalization is revolutionizing production, making it more accessible and efficient.

    Moreover, advancements in generative AI are paving the way for innovative storytelling techniques. By 2026, AI will enable the creation of immersive narratives through features like scene-aware soundscapes and emotionally adaptive music, further enriching the viewer's experience. The emergence of 'AI-native cinematography' will redefine visual storytelling, allowing for seamless camera movements and dynamic lighting that reflect emotional states. As these technologies continue to advance, they are set to reshape the boundaries of creative expression in visual creation.

    Key Components of Text-to-Video Models

    Text-to-video models are essential for creating high-quality visual content, and they rely on several key components:

    • Text Encoder: This crucial element converts textual input into a format the model can interpret, enabling the generation of visuals that accurately reflect the provided descriptions.
    • Diffusion Process: It gradually transforms random noise into coherent image frames, facilitating the creation of high-quality visuals through iterative refinement.
    • Temporal Consistency Mechanism: This ensures that the generated frames maintain logical flow and coherence over time, which is vital for producing seamless narratives.
    • Output Layer: It integrates all previous elements to produce the completed video output, ensuring a polished and cohesive viewing experience.
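    The division of labor between these four components can be tied together in a toy pipeline. Everything below is a simplified stand-in (a hash-based "encoder", an averaging "temporal smoother"), not a real model; real systems use learned networks such as T5 or CLIP for encoding and a trained denoiser for the diffusion step.

```python
import hashlib
import numpy as np

def text_encoder(prompt: str, dim: int = 8) -> np.ndarray:
    """Stand-in text encoder: a deterministic pseudo-embedding derived
    from the prompt. Real systems use a learned transformer."""
    seed = int(hashlib.sha256(prompt.encode()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).standard_normal(dim)

def diffusion_process(embedding, frames=8, size=16, steps=4):
    """Stand-in reverse diffusion: start from pure noise and iteratively
    pull the frames toward a pattern conditioned on the embedding."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal((frames, size, size))
    target = float(embedding.mean()) * np.ones((size, size))
    for _ in range(steps):
        x = 0.5 * x + 0.5 * target  # one crude "denoising" step
    return x

def temporal_consistency(video):
    """Stand-in temporal consistency mechanism: blend each frame with
    its predecessor to smooth frame-to-frame jumps."""
    smoothed = video.copy()
    smoothed[1:] = 0.5 * (video[1:] + video[:-1])
    return smoothed

def output_layer(video):
    """Stand-in output layer: clamp values to a displayable range."""
    return np.clip(video, -1.0, 1.0)

def generate(prompt: str):
    emb = text_encoder(prompt)            # 1. encode the text
    video = diffusion_process(emb)        # 2. denoise toward the condition
    video = temporal_consistency(video)   # 3. smooth across frames
    return output_layer(video)            # 4. finalize the output

clip = generate("a red balloon drifting over a city")
print(clip.shape)  # (8, 16, 16): frames x height x width
```

    The point of the sketch is the flow of data: text in, embedding out, frames denoised under that condition, smoothed over time, then finalized.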

    Understanding these components is crucial for developers looking to customize or enhance their implementations. Each component plays a significant role in the overall effectiveness and quality of the generated content. Prodia supports this learning with user manuals and code snippets, guiding developers in setting up their projects in Node.js and Python. This enables rapid deployment and seamless integration of its high-performance media generation APIs.

    Take the next step in your development journey - explore Prodia's resources and elevate your projects today!

    Common Features of Text-to-Video Models

    Text-to-video models stand out due to several key features that significantly enhance their usability and effectiveness:

    • High-Quality Output: These models excel in generating visually appealing videos that accurately represent the input text, ensuring that the final product meets high standards of quality.
    • Real-Time Processing: With ultra-low latency, these tools can generate video material almost instantaneously. This capability is essential for applications requiring immediate feedback, such as live presentations or social media updates.
    • User-Friendly Interfaces: A significant advantage of modern text-to-video tools is their simplified integration processes. Developers can apply these frameworks without extensive technical skills, promoting a more inclusive atmosphere for creators.
    • Customization Options: Users benefit from the flexibility to adjust various parameters and settings, enabling them to tailor outputs to meet specific project requirements, whether for marketing, training, or entertainment.
    • Support for Multiple Formats: These systems can manage a range of input types, including scripts, keywords, and longer narratives, making them versatile tools for various creation needs.

    The blend of these features not only boosts user satisfaction but also establishes text-to-video models as vital instruments in the evolving realm of media creation. Embrace these advancements and elevate your content creation process today!

    Advantages of Text-to-Video Diffusion Models

    Text-to-video diffusion models offer compelling advantages that are reshaping the video production landscape:

    • Enhanced Creativity: These models empower creators to swiftly visualize concepts, turning abstract ideas into captivating video content. As Harold Kwabena Fearon notes, text-to-video diffusion marks a significant leap in content creation, freeing developers to focus on storytelling and aesthetics. This results in innovative outputs that resonate with audiences.
    • Cost Efficiency: By incorporating text-to-video technology, the need for extensive production resources is drastically reduced, making high-quality content creation accessible for smaller teams and startups. Traditional production can cost anywhere from $6,300 to $29,250 for a single minute of finished material; democratized content creation lets organizations produce engaging clips without that price tag.
    • Scalability: Text-to-video systems are inherently adaptable, catering to projects of all sizes, from brief clips to comprehensive narratives, without requiring major workflow changes. This flexibility is crucial for teams aiming to respond quickly to market demands, since text-to-video workflows significantly cut the time and costs associated with traditional production.
    • Rapid Prototyping: These models support swift iteration on visual content, enabling teams to gather feedback and make adjustments efficiently during the creative process. This agility accelerates production schedules (a reported 68% faster time-to-publish for marketing campaigns) and improves the overall quality of the final product, ensuring it meets audience expectations.

    Limitations of Text-to-Video Models

    Text-to-video models, while innovative, face several notable limitations that can impact their effectiveness:

    • Computational Intensity: The high resource demands of these models restrict access for smaller teams, making it challenging for them to leverage advanced video generation capabilities. This issue is compounded by the fact that 80% of workers using generative AI reported it has added to their workload and hampered productivity, highlighting the barriers smaller teams face in utilizing these technologies.

    • Quality Variability: The quality of outputs can fluctuate dramatically, influenced by the complexity of the input text and the training data utilized. Models trained on diverse and high-quality datasets tend to produce better results, while those with limited or biased data may yield subpar outputs. Metrics for assessing quality, such as naturalness and text similarity, further illustrate these quality fluctuations.

    • Temporal Coherence Issues: Achieving consistent motion and smooth transitions remains a significant hurdle. Many generated videos exhibit awkward motion because temporal coherence is difficult to maintain, which detracts from the viewing experience. Case studies indicate that keeping visuals consistent and motion realistic, especially in longer videos, is a persistent challenge.

    • Dependency on Training Data: The performance of text-to-video systems is heavily contingent on the quality and diversity of their training datasets. Models that rely on large datasets scraped from the internet without proper consent may face ethical and legal challenges, complicating their deployment in commercial settings. This raises concerns about the responsible use of data and the potential for legal repercussions.

    These limitations highlight the ongoing challenges in AI content creation and underscore the need for continued advances in technology and methodology to overcome them.

    Ethical Considerations in Text-to-Video Generation

    In the realm of text-to-video models, developers face critical ethical considerations that demand attention to ensure responsible media generation.

    • Content Authenticity is paramount. Generated videos must accurately represent information to avoid misleading viewers. Implementing robust verification processes is essential for maintaining the integrity of the material. The EU AI Act underscores this need, requiring operators of AI systems to clearly label deepfakes as artificially generated or manipulated. This highlights the importance of authenticity in media.

    • Next, we must address Bias in Outputs. Developers should actively confront biases present in training datasets, as these can skew representations in generated material. Utilizing diverse and representative datasets is a proactive approach to mitigate these biases, fostering equitable outcomes. For instance, studies reveal that hiring algorithms may favor certain genders or ethnicities, leading to unequal job opportunities. This emphasizes the necessity for fairness in AI-generated materials.

    • User Consent is another critical aspect. Obtaining explicit permissions when incorporating likenesses or personal data in video generation is essential. This practice not only respects individual rights but also enhances the ethical standing of the produced material. Organizations bear the ethical responsibility of implementing strong consent mechanisms to inform users about data usage, a point emphasized in recent discussions on AI ethics.

    • Finally, Transparency plays a vital role in maintaining audience trust. Clearly communicating the role of AI in content creation is essential. By being upfront about the use of AI technologies, developers can foster a more informed and engaged viewer base, ultimately contributing to a more ethical media landscape. As Bernard Marr notes, ethical AI is foundational for innovation and public trust, making transparency a critical component of responsible AI practices.

    Applications and Impacts of Text-to-Video Models

    Text-to-video models are revolutionizing various sectors with their diverse applications:

    • Marketing and Advertising: These models allow for the rapid generation of promotional videos from product descriptions, significantly enhancing audience engagement. Brands can respond swiftly to market trends, leveraging the global AI content generator market, which was valued at USD 415 million in 2022 and is projected to grow at a CAGR of 18.5%, reaching an estimated USD 2,172 million by 2032. This underscores the increasing reliance on AI for effective marketing strategies. Moreover, AI content generation empowers creators to customize scripts and visuals tailored to specific niches, crucial for boosting engagement.

    • Education: Text-to-video models transform traditional learning materials into dynamic visual content, catering to diverse learning styles. This approach not only improves comprehension but also enhances educational outcomes by making complex topics more accessible. Educators can convert written lessons into captivating visual formats, which has been shown to increase student involvement and retention.

    • Healthcare: AI video creation is vital in healthcare, converting medical guidelines and health tips into engaging videos that improve patient comprehension and adherence. This application showcases the technology's versatility and its potential to enhance communication in critical areas.

    • Entertainment: The technology democratizes content creation, enabling creators to produce short films or animations from scripts. This lowers barriers for aspiring filmmakers and animators, allowing a broader range of voices and stories to emerge in the entertainment landscape.

    • Social Media: Text-to-video tools facilitate the quick creation of engaging content for social platforms, significantly boosting user engagement and retention. As audiences increasingly favor short-form media, these tools help creators maintain a consistent posting schedule while adapting swiftly to trending topics, enhancing visibility and engagement on platforms like TikTok and Instagram. However, balancing automation with human authenticity is essential to ensure the content resonates personally with audiences.

    Comparison of Leading Text-to-Video Models

    When comparing leading text-to-video models, it’s crucial to consider several key factors:

    • Output Quality: Evaluate the visual fidelity and coherence of generated videos. High-quality outputs are essential for maintaining viewer engagement and delivering a professional product.

    • Processing Speed: Assess latency and real-time capabilities. Fast processing ensures immediate content generation, which is vital in today’s fast-paced environment.

    • Customization Options: Look for models that allow parameter adjustments. Tailoring outputs to specific needs can significantly enhance the relevance and effectiveness of the content.

    • Integration Ease: Consider how seamlessly the system can fit into existing workflows and tech stacks. A smooth integration process minimizes disruption and maximizes productivity.

    • Cost: Analyze pricing structures carefully. Ensuring alignment with budget constraints is fundamental for sustainable implementation.

    By focusing on these factors, you can make an informed decision that enhances your content creation capabilities.

    Future Challenges and Scope of Text-to-Video Models

    The future of text-to-video models presents several significant challenges that demand our attention:

    • Improving Quality and Consistency: Ongoing research is essential to enhance the realism and coherence of generated videos. Without this, the potential of these models remains unfulfilled.

    • Scalability: We must create systems capable of efficiently managing larger datasets and more intricate narratives. This scalability is crucial to ensure performance does not falter as complexity increases.

    • Ethical Governance: Establishing robust frameworks is vital to ensure the responsible use of AI-generated content. We need to address concerns about misinformation and bias to build trust in these technologies.

    • Integration with Emerging Technologies: It's imperative to explore how text-to-video models can synergize with other AI advancements, such as augmented reality and interactive media. This integration can lead to truly immersive experiences that captivate users.

    By tackling these challenges head-on, we can unlock the full potential of text-to-video diffusion, paving the way for innovative applications that enhance our digital landscape.

    Conclusion

    The evolution of text-to-video diffusion models marks a significant shift in how developers create engaging visual content from textual inputs. By leveraging advanced AI technologies, these models unlock new avenues for creativity, efficiency, and personalization in media generation.

    Key insights throughout this article reveal the advantages of text-to-video models, including enhanced creativity and cost efficiency, alongside challenges such as quality variability and ethical considerations that must be addressed. The exploration of Prodia as a high-performance API for media generation highlights the need for robust tools that enable seamless integration and rapid deployment for developers eager to innovate in this space.

    As the media creation landscape evolves, embracing the fundamentals of text-to-video diffusion will empower developers to produce captivating content, with the potential to reshape industries from marketing to education and entertainment. Creators should stay informed about these advancements, actively engage with emerging technologies, and prioritize ethical practices to ensure that the future of video generation is both responsible and inclusive.

    Frequently Asked Questions

    What is Prodia and what does it offer?

    Prodia is a high-performance API platform designed for media generation, providing developers with tools for rapid media creation. It offers ultra-fast APIs for tasks such as image to text, image to image, and inpainting, enabling efficient production of high-quality outputs.

    How fast is the output latency of Prodia?

    Prodia has an impressive output latency of just 190ms, allowing for quick implementation of media generation tasks without the complexities of GPU setups.

    What are text-to-video diffusion models?

    Text-to-video diffusion models are generative AI systems that transform textual descriptions into dynamic visual content. They are trained by gradually adding noise to video representations and learning to reverse the process, which lets them generate coherent sequences that align with a provided narrative.

    What advancements are expected in text to video diffusion models by 2026?

    By 2026, text to video diffusion models are expected to evolve with enhanced capabilities for real-time interaction and personalization, allowing brands and creators to produce content that dynamically adjusts based on audience data or real-time input.

    What are the key components of text-to-video models?

    The key components of text-to-video models are:

    • Text Encoder: converts textual input into an interpretable format for the model.
    • Diffusion Process: transforms random noise into coherent image frames through iterative refinement.
    • Temporal Consistency Mechanism: ensures logical flow and coherence in generated frames over time.
    • Output Layer: integrates all elements to produce a polished and cohesive video output.

    How does Prodia support developers in using its APIs?

    Prodia provides user manuals and code snippets to assist developers in setting up their projects in Node.js and Python, facilitating rapid deployment and seamless integration of its media generation APIs.

    List of Sources

    1. Prodia: The High-Performance API for Media Generation
    • Blog Prodia (https://blog.prodia.com/post/ai-performance-metrics-overview-key-insights-for-developers)
    • Blog Prodia (https://blog.prodia.com/post/10-best-ai-upscale-apps-for-developers-in-2026)
    • Blog Prodia (https://blog.prodia.com/post/10-open-source-ai-benchmark-projects-for-developers-in-2026)
    • Blog Prodia (https://blog.prodia.com/post/master-ai-generator-fill-image-best-practices-for-engineers)
    • Blog Prodia (https://blog.prodia.com/post/7-inference-ap-is-for-product-teams-to-accelerate-development)
    2. Understanding Text-to-Video Diffusion Models
    • 5 Bold Predictions for AI Video Generation in 2026 (https://higgsfield.ai/blog/top-5-predictions-for-ai-video-generation-in-2026)
    • The AI Video Revolution: How Text-to-Video Models Are Transforming Content Creation in 2026 (https://thedailynewsonline.com/the-ai-video-revolution-how-text-to-video-models-are-transforming-content-creation-in-2026/article_de58be2a-8f98-469a-b21d-06723eb20257.html)
    • The State of AI Video Creation 2026 | Vivideo (https://vivideo.ai/en/blog/state-of-ai-video-creation-2026)
    • Accelerating Diffusion Models with an Open, Plug-and-Play Offering | NVIDIA Technical Blog (https://developer.nvidia.com/blog/accelerating-diffusion-models-with-an-open-plug-and-play-offering)
    • How AI Is Changing Video Creation in 2026 (https://linkedin.com/pulse/how-ai-changing-video-creation-2026-ashish-dwivedi-ohvic)
    3. Key Components of Text-to-Video Models
    • 5 Bold Predictions for AI Video Generation in 2026 (https://higgsfield.ai/blog/top-5-predictions-for-ai-video-generation-in-2026)
    • The AI Video Revolution: How Text-to-Video Models Are Transforming Content Creation in 2026 (https://thedailynewsonline.com/the-ai-video-revolution-how-text-to-video-models-are-transforming-content-creation-in-2026/article_de58be2a-8f98-469a-b21d-06723eb20257.html)
    • AI Video Trends: AI Video Predictions For 2026 | LTX Studio (https://ltx.studio/blog/ai-video-trends)
    • The State of AI Video Creation 2026 | Vivideo (https://vivideo.ai/en/blog/state-of-ai-video-creation-2026)
    • Inside Sora's Architecture (https://medium.com/@frinktyler1445/inside-soras-architecture-e9abe429a49c)
    4. Common Features of Text-to-Video Models
    • The AI Video Revolution: How Text-to-Video Models Are Transforming Content Creation in 2026 (https://thedailynewsonline.com/the-ai-video-revolution-how-text-to-video-models-are-transforming-content-creation-in-2026/article_de58be2a-8f98-469a-b21d-06723eb20257.html)
    • 11 Powerful AI Video Tools for 2025 (https://atlassian.com/blog/loom/ai-video-tools)
    • What is Seedance? The Chinese AI app sending Hollywood into a panic (https://bbc.com/news/articles/ckg1dl410q9o)
    • The State of AI Video Creation 2026 | Vivideo (https://vivideo.ai/en/blog/state-of-ai-video-creation-2026)
    5. Advantages of Text-to-Video Diffusion Models
    • The AI Video Revolution: How Text-to-Video Models Are Transforming Content Creation in 2026 (https://thedailynewsonline.com/the-ai-video-revolution-how-text-to-video-models-are-transforming-content-creation-in-2026/article_de58be2a-8f98-469a-b21d-06723eb20257.html)
    • THE BREAKTHROUGH OF TEXT-TO-VIDEO AI IN MODERN CONTENT CREATION – Sustineri (https://sustineriattorneys.com/2024/10/14/the-breakthrough-of-text-to-video-ai-in-modern-content-creation)
    • The Business Case for AI Video Production: Key Stats and Benefits (https://elai.io/business-case-for-ai)
    • 75 AI Video Statistics Marketers Need to Know (2026) (https://vivideo.ai/en/blog/ai-video-statistics-2026)
    • What is Text-to-Video Models in AI? (https://avahi.ai/glossary/text-to-video-models)
    6. Limitations of Text-to-Video Models
    • The Promise, the Pitfalls and the Price: How AI Video Generation Really Differs From Traditional Video Editing - Crews Control (https://crewscontrol.com/blog-central/the-promise-the-pitfalls-and-the-price-how-ai-video-generation-really-differs-from-traditional-video-editing)
    • A dataset of text prompts, videos and video quality metrics from generative text-to-video AI models (https://sciencedirect.com/science/article/pii/S2352340924004839)
    • The Hidden Downsides of AI-Generated Videos (https://digitalbrew.com/blog/the-hidden-downsides-of-ai-generated-videos)
    • Audiences doubt the benefits of AI-generated imagery in news are worth the risks, new study finds (https://niemanlab.org/2025/11/audiences-doubt-the-benefits-of-ai-generated-imagery-in-news-are-worth-the-risks-new-study-finds)
    7. Ethical Considerations in Text-to-Video Generation
    • 8 AI Ethics Trends That Will Redefine Trust And Accountability In 2026 (https://forbes.com/sites/bernardmarr/2025/10/24/8-ai-ethics-trends-that-will-redefine-trust-and-accountability-in-2026)
    • AI generated videos threaten 2026 mid-term election integrity | Danny Schwab posted on the topic | LinkedIn (https://linkedin.com/posts/dan-schwab-eidolon_i-have-a-huge-concern-with-ai-generated-videos-activity-7407505272081907712-DUNW)
    • Ethics of Generative AI: Key Considerations [2026] (https://aegissofttech.com/generative-ai-services/ai-ethics)
    • Artificial Intelligence And Ethics In The Era Of Digital Transformation - The Defence Horizon Journal (https://tdhj.org/blog/post/ai-ethics-transformation)
    8. Applications and Impacts of Text-to-Video Models
    • The AI Video Revolution: How Text-to-Video Models Are Transforming Content Creation in 2026 (https://thedailynewsonline.com/the-ai-video-revolution-how-text-to-video-models-are-transforming-content-creation-in-2026/article_de58be2a-8f98-469a-b21d-06723eb20257.html)
    • Practical Applications of AI Text to Video generation across industries (https://pixazo.ai/blog/text-to-video-generation-applications)
    • How AI is Impacting Social Media Content Creation - Appraisal Economics (https://appraisaleconomics.com/how-ai-is-impacting-social-media-content-creation)
    • Why AI-Generated Videos Are Becoming Popular on Social Media (https://nowxpresso.com/why-ai-generated-videos-are-becoming-popular-on-social-media-platforms)
    9. Comparison of Leading Text-to-Video Models
    • Kling vs Wan vs Seedream: Which AI Video Model Is Best in 2026? - Atlas Cloud Blog (https://atlascloud.ai/blog/kling-vs-wan-vs-seedream-ai-video-models-2026)
    • Best AI Text-to-Video Generator for 2026 : How to Choose the Right Tool | ABNewswire (https://abnewswire.com/pressreleases/best-ai-texttovideo-generator-for-2026-how-to-choose-the-right-tool_786282.html)
    • 7 Best Text-to-Video AI Generators for 2026 (Ranked & Reviewed) (https://colossyan.com/posts/7-best-text-to-video-ai-generators-for-2026-ranked-reviewed)
    • 15 AI Video Models Tested: Kling 3.0 vs Veo 3.1 vs Sora 2 (February 2026) - TeamDay.ai (https://teamday.ai/blog/best-ai-video-models-2026)
    • Top 10 Best AI Video Generators of 2026 (Tested & Compared) (https://manus.im/blog/best-ai-video-generator)
    10. Future Challenges and Scope of Text-to-Video Models
    • The AI Video Revolution: How Text-to-Video Models Are Transforming Content Creation in 2026 (https://thedailynewsonline.com/the-ai-video-revolution-how-text-to-video-models-are-transforming-content-creation-in-2026/article_de58be2a-8f98-469a-b21d-06723eb20257.html)
    • A breakthrough in AI video consistency - AdNews (https://adnews.com.au/opinion/a-breakthrough-in-ai-video-consistency)
    • The Future of Storytelling: Mastering AI Design and AI Generated Video in 2026 - Theology Pathfinder (https://derekdemars.com/blog/the-future-of-storytelling-mastering-ai-design-and-ai-generated-video-in-2026)
    • Hollywood isn't happy about the new Seedance 2.0 video generator | TechCrunch (https://techcrunch.com/2026/02/15/hollywood-isnt-happy-about-the-new-seedance-2-0-video-generator)
    • AI Video Trends: AI Video Predictions For 2026 | LTX Studio (https://ltx.studio/blog/ai-video-trends)

    Build on Prodia Today