Please Note: This schedule is in PDT (Pacific Daylight Time, UTC/GMT-7)
Generative AI Summit 2023
LIVE: Thursday, 20th July
10:00 - 10:05
Welcome To ODSC Generative AI Summit!
10:05 - 10:50
Responsible AI In The Age Of Generative AI Panel

Generative AI has changed the landscape for the ethical and responsible use of AI in business or academic settings. This panel will highlight modern difficulties of implementing generative AI, appropriate and tangible solutions, and what we can look forward to in the future.

Elizabeth M. Adams
Affiliate Fellow | Stanford Institute (HAI)
Eli Chen
CTO & Co-Founder | Credo AI
Tracy Ring
CDO/Global GenAI Lead | Accenture AI
10:20 - 10:50
LLMOps for Enterprise: Key Challenges when Deploying for Production

Generative AI and LLMs are definitely buzzwords, but how can your organisation get value from these potentially game-changing technologies? Furthermore, how can you overcome the key challenges this technology poses to organisations around the world? In this session, the Seldon technical team are here to cut through the noise and dive into the key opportunities and challenges of Generative AI and LLMs, as well as some best-practice approaches for deploying these models at scale.

From model inference and environmental impact to audit and privacy, LLMOps is essential for the responsible deployment of LLMs at scale. At Seldon, we’re already helping organisations to deploy LLMs and are helping our customers make this easier, cheaper and faster. We’ve brought together experts from across the Seldon technical team, CTO Clive Cox, MLOps Engineer Sherif Akoush and Solutions Engineer Andrew Wilson, to tackle this tricky topic from all angles.

Clive Cox
CTO | Seldon
Andrew Wilson
Solutions Engineer | Seldon
Sherif Akoush
MLOps Engineer | Seldon
10:55 - 11:25
Unlocking the Power of Generative AI with MosaicML

Generative AI has taken the world by storm; however, challenges with technical complexity, security, and cost have limited its adoption by many organizations. In this session, we will explore how MosaicML’s full-stack platform for generative AI makes it easy and efficient for developers to build and deploy models in a secure environment. We will take a deeper look into real-world examples of how businesses are using their proprietary data to train and deploy LLMs and other generative AI models with MosaicML.

Hagay Lupesko
VP Engineering | MosaicML
11:00 - 11:30
The MLOps Stack in a Gen-AI World

As companies continue to embrace GenAI models, streamlining ML pipelines and productionizing models becomes crucial to making GenAI work for your business. In a nutshell, GenAI MLOps is a comprehensive approach to the GenAI model pipeline that ensures each stage is production-ready and, no less importantly, properly monitored and maintained in production. If your models do great in experimentation but you are still trying to put all the production pieces together, this session might help you understand what's going wrong and how to fix it. Working according to this methodology lets data scientists iterate rapidly, which is at the core of a successful GenAI project.

Learn how to:
– Maintain a centralized production focused model registry
– Monitor and track your Gen AI models in your production environment
– Enhance your Gen AI capabilities and accuracy in a continuous manner during production

Yuval Fernbach
Co-founder & CTO | Qwak
11:30 - 12:00
On Brains, Waves and Representations

Generative AI has made enormous strides in recent years. In this talk I will discuss how to build meaningful inductive biases into models for spatio-temporal data domains, such as video. We first generalize the idea of equivariance to a much looser and learnable constraint, and then add a prior that latent variable representations should evolve as PDEs and in particular waves. We find that this idea leads to a new form of disentangling. We also show that it is surprisingly easy to get wavelike dynamics in the latent representations, and that neurons develop a form of orientation selectivity and topography. All in all, we argue that this brain-inspired inductive bias might aid learning on sequence data.

Max Welling Ph.D
Distinguished Scientist | Microsoft Research
11:35 - 12:20
Generative Adversarial Networks 101 (Tutorial)

Generative models are at the heart of DeepFakes, and can be used to synthesize, replace, or swap attributes of images. Learn the basics of Generative Adversarial Networks, the famous GANs, from the ground up: autoencoders, latent spaces, generators, discriminators, GANs, DCGANs, WGANs, and more. The main goal of this session is to show you how GANs work: we will learn about latent spaces and how to use them to generate synthetic data while discussing implementation and training details, such as Wasserstein distance and gradient penalty. We will use Google Colab and work our way together into building and training GANs. You should be comfortable using Jupyter notebooks and Numpy, and training simple models in PyTorch.
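The Wasserstein distance and gradient penalty covered in this tutorial can be previewed concretely. The sketch below is illustrative and not from the tutorial materials: for a linear critic the gradient is available in closed form, so the WGAN-GP penalty term can be computed without autograd. All function names are hypothetical.

```python
import numpy as np

# WGAN-GP penalizes the critic when the gradient norm of its output with
# respect to interpolated samples deviates from 1. For a linear critic
# f(x) = w @ x, that gradient is just w, so the penalty has a closed form.

def gradient_penalty_linear(w, lam=10.0):
    """lambda * (||grad f||_2 - 1)^2 for a linear critic f(x) = w @ x."""
    grad_norm = np.linalg.norm(w)
    return lam * (grad_norm - 1.0) ** 2

def interpolate(real, fake, eps):
    """Random point on the line between a real and a generated sample."""
    return eps * real + (1.0 - eps) * fake

w = np.array([1.0, 0.0])           # critic weights with ||w|| = 1
print(gradient_penalty_linear(w))  # 0.0: a unit-norm gradient is not penalized
```

In a real WGAN-GP the critic is a deep network and the gradient comes from autograd, but the penalty formula is exactly this term evaluated at interpolated points.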

Daniel Voigt Godoy
Data Scientist And Author | Independent
12:05 - 12:35
Machines vs. Minds: Navigating the Future of Generative AI

What is generative AI? How does machine creativity relate to human creativity? What will become of us in the age of creative machines? We will delve into the essence of generative AI, drawing comparisons to our own brains. The discussion will examine the unique strengths of humans and machines, and explore the potential for effective collaboration between us and AI systems. A vision of the future of creativity will be presented, along with a discussion of potential risks brought on by these powerful creative machines.

Maya Ackerman
CEO and Co-Founder | WaveAI
12:30 - 13:15
Pretrain Vision and Language Foundation Models on AWS (Tutorial)

Whether they are intimidating or exciting, high-performing or expensive, the future of machine learning and artificial intelligence is clearly trending towards foundation models. In this session we’ll dive into this topic, exploring both beneficial and challenging aspects of this technology today. In particular we’ll learn about key technologies available on AWS that help you pretrain the foundation models of the future. From distributed training to custom accelerators, reward modeling to reinforcement learning, learn how to create your own state-of-the-art models.

Emily Webber
Principal ML Solutions Architect | AWS
12:40 - 13:10
Generative Large Language Models and Hallucinations

Generative Large Language Models (LLMs) such as GPT4 and ChatGPT have revolutionized the field of artificial intelligence with their impressive capabilities. However, a major challenge that these models present is their tendency to ‘hallucinate’ confidently, meaning they can create plausible-sounding yet false information. For businesses aiming to implement these LLMs into enterprise or end-user applications, it is crucial to address this hallucination problem to ensure the delivery of accurate, reliable information.

This talk aims to delve into the intricacies of the hallucination problem in LLMs and shed light on effective strategies to overcome it. We will explore how LLMs, in their quest to provide relevant and comprehensive responses, often generate information that sounds accurate but may not necessarily be factual or grounded in reality.

The crux of our discussion will be the innovative solution of Truth Checker models. These models serve as a second layer of scrutiny that can discern the accuracy of the information generated by LLMs. By cross-verifying the output against a vast array of trusted and verifiable sources, they ensure the veracity of the data provided by the LLMs.
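The second-layer idea described above can be sketched in a few lines. This is a hedged toy, not the speaker's system: the trusted store is a plain dict and the claim format is hypothetical, whereas a production truth checker would use retrieval over verified sources and a trained verifier model.

```python
# Toy "truth checker" second pass: each claim an LLM emits is checked
# against a store of trusted facts before being shown to the user.

TRUSTED_FACTS = {
    "capital of france": "paris",
    "boiling point of water at sea level": "100 c",
}

def check_claim(topic: str, claimed_value: str) -> str:
    """Return 'supported', 'contradicted', or 'unverifiable'."""
    known = TRUSTED_FACTS.get(topic.lower())
    if known is None:
        return "unverifiable"  # no trusted source covers this topic
    return "supported" if known == claimed_value.lower() else "contradicted"

print(check_claim("Capital of France", "Paris"))  # supported
print(check_claim("Capital of France", "Lyon"))   # contradicted
```

The key design point survives the simplification: the generator and the verifier are separate components, so a fluent but ungrounded answer can still be caught before delivery.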

Chandra Khatri
Co-Founder | Got It AI
13:15 - 13:45
Recent Advances in Diffusion Generative Models

Generative models are typically based on explicit representations of probability distributions (e.g., autoregressive or VAEs) or implicit sampling procedures (e.g., GANs). I will present an alternative approach based on directly modeling the vector field of gradients of the data distribution (scores), which underlies recent score-based diffusion models. This framework allows flexible architectures and requires neither sampling during training nor adversarial training methods. Additionally, score-based diffusion generative models enable exact likelihood evaluation through connections with neural ODEs, achieving state-of-the-art sample quality and excellent likelihoods on image datasets. I will discuss numerical and distillation methods to accelerate sampling and their application to inverse problem solving.
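To make the score idea concrete: once a model knows the score, grad_x log p(x), it can draw samples with Langevin dynamics. The sketch below is a minimal illustration, not the speaker's method: the target is a standard normal whose score is known in closed form (-x), so no network needs to be trained.

```python
import numpy as np

# Langevin sampling driven only by the score of the target distribution.

def score(x):
    """Score of a standard normal: grad_x log N(x; 0, 1) = -x."""
    return -x

def langevin_sample(n_samples=5000, n_steps=500, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=4.0, size=n_samples)  # start far from the target
    for _ in range(n_steps):
        # x <- x + (step/2) * score(x) + sqrt(step) * noise
        x = x + 0.5 * step * score(x) + np.sqrt(step) * rng.normal(size=n_samples)
    return x

samples = langevin_sample()
print(samples.mean(), samples.std())  # close to the target's 0 and 1
```

Score-based diffusion models replace the analytic `score` with a neural network trained by denoising score matching across noise levels; the sampling loop is the same idea run across those levels.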

Stefano Ermon Ph.D
Assistant Professor | Stanford University
13:20 - 13:50
Generative AI with Hugging Face

Generative AI is a rapidly growing field, but it can be shrouded in mystery and jargon, making it difficult for non-technical professionals to understand. This talk aims to demystify generative AI and introduce you to building generative AI models and applications with Hugging Face open-source solutions. By the end of this talk, you will better understand how generative AI works and how it can be applied in various industries, such as marketing and customer service. You will also gain a high-level understanding of the underlying models, enabling you to make more informed decisions about using generative AI in your business.

Julien Simon
Chief Evangelist | Hugging Face
13:50 - 14:20
BloombergGPT: A Large Language Model for Finance

The use of NLP in the realm of financial technology is broad and complex, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks; however, no LLM specialized for the financial domain has been reported in the literature. In this work, we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally, we explain our modeling choices, training process, and evaluation methodology.

Ozan Irsoy Ph.D
Research Scientist | Bloomberg
13:55 - 14:25
How To Train Your Vicuna – Finetuning, Serving, and Evaluating LLMs In The Wild

Since Meta released the Llama weights and OpenAI announced GPT-4, the landscape of open large language models (LLMs) has been seeing rapid changes every day.

In this talk, I will share our recent experience in finetuning, evaluating, and serving the chatbot Vicuna, which is considered a high-quality open-source chatbot closest to ChatGPT (GPT-3.5-turbo) even today. I will briefly explain how we curated a high-quality dataset and finetuned Llama into Vicuna. I will then discuss how we serve Vicuna, together with many other chatbots in the Chatbot Arena, achieving high throughput and low latency given only a limited number of university-donated GPUs. I’ll also discuss emerging systems and ML challenges in serving and evaluating LLMs, and our ongoing efforts. This is joint work with members of the LMSYS Org team.

Hao Zhang
Assistant Professor | UCSD
14:25 - 14:55
Matching Identities Using Large Language Models

Almost every application in the world depends on understanding the relationships between people and companies. From master data management to anti-money laundering to deduplicating your Salesforce instance, many applications depend on the capacity to efficiently search through databases of personal or corporate names and understand who is likely the same entity. A single individual can be referred to by various name variants, which may be written using different scripts, aliases, or nicknames. In this talk, we will introduce a new method for name matching using a large language model that works on the byte level, which we fine-tuned to embed personal names in a vector space for name retrieval tasks. We will outline the fine-tuning process, discuss results from test sets spanning multiple scripts, and compare our LLM model to some strong baseline models.
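The embed-then-retrieve pattern behind this approach can be sketched without the fine-tuned LLM. As a hedged stand-in, the toy below hashes byte trigrams into a fixed-size vector and ranks candidates by cosine similarity; every name and function here is illustrative, and the real system replaces `embed` with a byte-level language model.

```python
import hashlib
import math

DIM = 256  # size of the toy embedding space

def embed(name: str) -> list:
    """Hash byte trigrams of a lowercased name into a DIM-dim count vector."""
    vec = [0.0] * DIM
    data = name.lower().encode("utf-8")
    for i in range(len(data) - 2):
        bucket = int(hashlib.md5(data[i:i + 3]).hexdigest(), 16) % DIM
        vec[bucket] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def best_match(query, candidates):
    """Rank candidate names by similarity to the query in embedding space."""
    q = embed(query)
    return max(candidates, key=lambda c: cosine(q, embed(c)))

print(best_match("Jon Smith", ["John Smith", "Maria Garcia", "Wei Chen"]))
```

Even this crude embedding matches the spelling variant, because shared byte trigrams land in the same buckets; the fine-tuned model's advantage is handling scripts, aliases, and nicknames that share few or no bytes.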

Catherine Havasi
Chief Of Innovation | Babel Street
Kfir Bar
Chief Scientist | Babel Street
14:30 - 15:00
Text to Insights: Building Real-Time Analytics Systems with Generative AI

Text to SQL is a long-standing challenge in the NLP community, but advancements in Generative AI, particularly large language models (LLMs), have brought us closer than ever before. Join us to explore the complexities of building Text to SQL systems, focusing on open models.

We will discuss various approaches for constructing these systems, including LLM types, finetuning methods, and data augmentation techniques for training optimal models that generate SQL from text and describe query results. Discover how to avoid pitfalls by using retrieval-augmented generation and providing context with metadata.

Moreover, we will demonstrate how to integrate a text to SQL system with Apache Spark structured streaming to create a real-time insight engine that maintains data freshness. Throughout the session, we will guide you through the end-to-end process of building such a system using open source tools and models.
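The "context with metadata" idea described above can be sketched as prompt construction: retrieve the schema metadata for the relevant tables and place it in the prompt, so the generated SQL is grounded in tables that actually exist. The schema dict, retrieval rule, and prompt template below are illustrative assumptions, not Databricks APIs.

```python
# Toy retrieval-augmented prompt builder for text-to-SQL.

SCHEMA_METADATA = {
    "orders": "orders(order_id INT, customer_id INT, amount DECIMAL, ts TIMESTAMP)",
    "customers": "customers(customer_id INT, name TEXT, region TEXT)",
}

def retrieve_tables(question: str) -> list:
    """Naive keyword retrieval; real systems use vector search over metadata."""
    return [t for t in SCHEMA_METADATA if t.rstrip("s") in question.lower()]

def build_prompt(question: str) -> str:
    """Ground the LLM by pasting retrieved schemas ahead of the question."""
    context = "\n".join(SCHEMA_METADATA[t] for t in retrieve_tables(question))
    return (
        "Given these tables:\n"
        f"{context}\n"
        f"Write a SQL query answering: {question}"
    )

print(build_prompt("Total order amount per customer region?"))
```

The same prompt builder can run inside a streaming job, rebuilding context as new tables and questions arrive, which is where the Spark structured streaming integration comes in.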

Avinash Sooriyarachchi
Solutions Architect | Databricks
Dillon Bostwick
Senior Solutions Architect | Databricks
15:00 - 15:45
Government Policy in Generative AI Panel

With the explosion of AI over the past year – and specifically with generative AI – laws and regulations have been forced to adapt in a short amount of time. In this panel, the experts will discuss the difficulties faced with the rise of Generative AI, what’s being done to address the need for improved governance, and what the future holds for laws and regulations surrounding artificial intelligence.

Brian Drake
Federal CTO | Accrete AI
David Danks Ph.D
Professor | UC San Diego
Eric Xing Ph.D
Professor | Carnegie Mellon
15:05 - 15:35
Accelerating Virtual Twins using Generative AI and Synthetic Clinical Trial Data

Clinical trial data remain mainly siloed given concerns of patient privacy and clinical trial sponsor identity disclosure. Advances in generative AI have enabled the creation of generative adversarial networks and variational autoencoders to generate synthetic data from real data, where these synthetic datasets are able to mimic the properties and trends of real data without disclosing patient-specific information. In the context of healthcare datasets, synthetic data presents a “Virtual Twin” for real clinical trial data, preserving the clinical insights, endpoints and outcomes of interest present in the real clinical trial data while, most importantly, protecting patient privacy and trial sponsor anonymity. In this talk, we will discuss (i) open-source generative models to create synthetic data, (ii) cross-industry use cases for synthetic data with a specific focus on healthcare (data augmentation, test data creation, ML model improvements), and (iii) a suite of metrics to evaluate synthetic data quality focusing on fidelity, utility, and privacy. The key takeaways will focus on how synthetic data and generative AI can accelerate the growth of a healthy ecosystem for data sharing and continued innovation in healthcare and other industries.

Afrah Shafquat Ph.D
Senior Data Scientist II | Medidata AI
15:50 - 16:00
Wrap Up Summary