Welcome to the Data Science Conversations Podcast, hosted by Damien Deighan and Dr Philipp Diesinger. We bring you interesting conversations with the world's leading academics working on cutting-edge topics with potential for real-world impact. We explore how their latest research in Data Science and AI could scale into broader industry applications, so you can expand your knowledge and grow your career. Every 4 or 5 episodes we feature an industry trailblazer from a strong academic background who has applied research effectively in the real world. Podcast Website: www.datascienceconversations.com
In this episode, we had the privilege of speaking with Walid Mehanna, Chief Data and AI Officer at Merck Group. Walid shares deep insights into how large, complex organizations can scale data and AI and create lasting impact through thoughtful leadership. As Chief Data & AI Officer of Merck Group, Walid leads the Merck Data & AI organization, delivering strategy, value, architecture, governance, engineering, and operations across the whole company globally. Hand in hand with Merck's business sectors and their data offices, his organization harnesses the power of Data & AI. Walid describes himself as another curious mind at Merck dedicated to human progress.…
In our latest episode of the Data Science Conversations Podcast, we spoke with Christoph Sporleder, Managing Partner at Rewire, about the evolving role of consulting in the data and AI space. This conversation is a must-listen for anyone dealing with the challenges of integrating AI into business processes or considering an AI project with an external consulting firm. Christoph draws from decades of experience, offering practical advice and actionable insights for organizations and practitioners alike.

Key Topics Discussed
1. Evolution of Data and Cloud Computing - The shift from local computing to cloud technologies, enabling broader data integration and advanced analytics, with the rise of IoT and machine data.
2. Data Management Challenges - The evolution from data warehouses to data lakes and the emerging concept of data mesh for better governance and scalability.
3. Importance of Strategy in AI - Why a clear strategy is crucial for AI adoption, including aligning organizational leadership and identifying impactful use cases.
4. Sectoral Adoption of Data and AI - Differences in adoption across sectors, with early adopters in finance and insurance versus later adoption in manufacturing and infrastructure.
5. Consulting Models and Engagement - Insights into consulting engagement types, including strategy consulting, system integration, and body leasing, and their respective challenges and benefits.
6. Challenges in AI Implementation - Common pitfalls in AI projects, such as misalignment with business goals, inadequate infrastructure planning, and siloed lighthouse initiatives.
7. Leadership's Role in AI Success - The critical need for senior leadership commitment to drive AI adoption, ensure process integration, and manage organizational change.
8. Effective Collaboration with Consultants - Best practices for successful partnerships with consultants, including aligning on objectives, managing personnel transitions, and setting clear engagement expectations.
9. Future Trends in Data and AI - Emerging trends like componentized AI architectures, Gen AI integration, and the growing focus on embedding AI within business processes.
10. Tips for Managing Long-Term Projects - Strategies for handling staff rotations and maintaining project continuity in consulting engagements, emphasizing planning and communication.…
KP Reddy, founder and managing partner of Shadow Ventures, explains how AI is set to redefine the startup landscape and the venture capital model. KP shares his unique perspective on the rapidly evolving role of AI in entrepreneurship, offering insights into:
- GenAI adoption in large companies, which is still limited
- How AI is empowering leaner, more efficient startups
- The potential for AI to disrupt traditional venture capital strategies
- The emergence of new business models driven by AI capabilities
- Real-world applications of AI in industries like construction, life sciences, and professional services…
- Early Interest in Generative AI: Martin's initial exposure to Generative AI in 2016 through a conference talk in Milano, Italy, and his early work with Generative Adversarial Networks (GANs).
- Development of GANs and Early Language Models since 2016: The evolution of Generative AI from visual content generation to text generation with models like Google's Bard, and the increasing popularity of GANs in 2018.
- Launch of GenerativeAI.net and Online Course: Martin's creation of GenerativeAI.net and an online course, which gained traction after being promoted on platforms like Reddit and Hacker News.
- Defining Generative AI: Martin's explanation of Generative AI as a technology focused on generating content, contrasting it with Discriminative AI, which focuses on classification and selection.
- Evolution of GenAI Technologies: The shift from LSTM models to Transformer models, highlighting key developments like the "Attention Is All You Need" paper and the impact of the Transformer architecture on language models.
- Impact of Computing Power on GenAI: The role of increasing computing power and larger datasets in improving the capabilities of Generative AI.
- Generative AI in Business Applications: Martin's insights into the real-world applications of GenAI, including customer service automation, marketing, and software development.
- Retrieval Augmented Generation (RAG) Architecture: The use of RAG architecture in enterprise AI applications, where documents are chunked and queried to provide accurate and relevant responses using large language models (a minimal sketch follows after this summary).
- Technological Drivers of GenAI: The advancements in chip design, including Nvidia's focus on GPU improvements and the emergence of new processing unit architectures like the LPU.
- Small vs. Large Language Models: A comparison between small and large language models, discussing their relative efficiency, cost, and performance, especially in specific use cases.
- Challenges in Implementing GenAI Systems: Common challenges faced in deploying GenAI systems, including the costs associated with training and fine-tuning large language models and the importance of clean data.
- Measuring GenAI Performance: Martin's explanation of the complexities in measuring the performance of GenAI systems, including the use of the Hallucination Leaderboard for evaluating language models.
- Emerging Trends in GenAI: Discussion of future trends such as the rise of multi-agent frameworks, the potential for AI-driven humanoid robots, and the path towards Artificial General Intelligence (AGI).…
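The RAG item above lends itself to a short illustration. The following is a minimal sketch of the chunk-retrieve-generate loop, not the architecture discussed in the episode: the fixed-size chunking, the TF-IDF retrieval stand-in, and the `ask_llm` helper are all illustrative assumptions.

```python
# Minimal Retrieval Augmented Generation (RAG) sketch.
# Assumptions: TF-IDF stands in for a neural embedding model, and
# `ask_llm` is a placeholder for whatever LLM client you actually use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def chunk(text: str, size: int = 300) -> list[str]:
    """Split a document into fixed-size character chunks (illustrative only)."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks most similar to the question."""
    vectorizer = TfidfVectorizer().fit(chunks + [question])
    chunk_vecs = vectorizer.transform(chunks)
    question_vec = vectorizer.transform([question])
    scores = cosine_similarity(question_vec, chunk_vecs)[0]
    top = sorted(range(len(chunks)), key=lambda i: scores[i], reverse=True)[:k]
    return [chunks[i] for i in top]


def ask_llm(prompt: str) -> str:
    """Placeholder LLM call; replace with your provider's client."""
    return "<answer from your LLM would appear here>"


def answer(question: str, documents: list[str]) -> str:
    chunks = [c for doc in documents for c in chunk(doc)]
    context = "\n\n".join(retrieve(question, chunks))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return ask_llm(prompt)


if __name__ == "__main__":
    docs = ["The warranty covers battery defects for two years. Water damage is not covered."]
    print(answer("Does the warranty cover water damage?", docs))
```

In practice the TF-IDF step would usually be replaced by neural embeddings stored in a vector database, but the control flow of chunking, retrieving, and prompting stays the same.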
In this episode, we sit down with Steve Orrin, Federal Chief Technology Officer at Intel Corporation. Steve shares his extensive experience and insights on the transformative power of AI and its parallels with past technological revolutions. He discusses Intel's pioneering role in enabling these shifts through innovations in microprocessors, wireless connectivity, and more. Steve highlights the pervasive role of AI in various industries and everyday technology, emphasizing the importance of a heterogeneous computing architecture to support diverse AI environments. He talks about the challenges of operationalizing AI, ensuring real-world reliability, and the critical need for robust AI security. Confidential computing emerges as a key solution for protecting AI workloads across different platforms. The episode also explores Intel's strategic tools like oneAPI and OpenVINO, which streamline AI development and deployment. This episode is a must-listen for anyone interested in the evolving landscape of AI and its real-world applications.

Intel's Legacy and Technological Revolutions
- Historical parallels between past tech revolutions (PC era, internet era) and the current AI era.
- Intel's contributions to major technological shifts, including the development of wireless technology, USB, and cloud computing.

AI's Current and Future Landscape
- AI's pervasive role in everyday technology and various industries.
- Importance of computing hardware in facilitating AI advancements.
- AI's integration across different environments: cloud, network, edge, and personal devices.

Intel's Approach to AI
- Focus on heterogeneous computing architectures for diverse AI needs.
- Development of software tools like oneAPI and OpenVINO to enable cross-platform AI development.

Challenges and Solutions in AI Deployment
- Scaling AI from lab experiments to real-world applications.
- Ensuring AI security and trustworthiness through transparency and lifecycle management.
- Addressing biases in AI datasets and continuous monitoring for maintaining AI integrity.

AI Security Concerns
- Protection of AI models and data through hardware security measures like confidential computing.
- Importance of data privacy and regulatory compliance in AI deployments.
- Emerging threats such as AI model poisoning, prompt injection attacks, and adversarial attacks.

Innovations in AI Hardware and Software
- Confidential computing as a critical technology for securing AI.
- Research into using AI for chip layout optimization and process improvements in various industries.
- Future trends in AI applications, including generative AI for fault detection and process optimization.

Collaboration and Standards in AI Security
- Intel's involvement in developing industry standards and collaborating with competitors and other stakeholders.
- The role of industry forums and standards bodies like NIST in advancing AI security.

Advice for Aspiring AI Security Professionals
- Importance of hands-on experience with AI technologies.
- Networking and collaboration with peers and industry experts.
- Staying informed through industry news, conferences, and educational resources.

Exciting Developments in AI
- Fusion of multiple AI applications for complex problem-solving.
- Advancements in AI hardware, such as AI PCs and edge devices.
- Potential transformative impacts of AI on everyday life and business operations.…
In this episode we talk to Kirk Marple about the power of Knowledge Graphs when combined with GenAI models. Kirk explains the growing relevance of knowledge graphs in the AI era, their practical applications, their integration with LLMs, and the future potential of Graph RAG. Kirk Marple, a veteran of Microsoft and General Motors, has spent the last 30 years in software development and data leadership roles. He also successfully exited the first startup he founded, RadiantGrid, acquired by Wohler Technologies. Now, as the technical founder and CEO of Graphlit, Kirk and his team are streamlining the development of vertical AI apps with their end-to-end, cloud-based offering that ingests unstructured data and leverages retrieval augmented generation to improve accuracy, domain specificity, adaptability, and context understanding – all while expediting development.

Episode Summary
- Introduction to Knowledge Graphs: Knowledge graphs extract relationships between entities like people, places, and things, facilitating efficient information retrieval. They represent intricate interactions and interrelationships, enabling users to "walk the graph" and uncover deeper insights.
- Importance in the AI Era: Knowledge graphs enhance data retrieval and filtering, crucial for feeding accurate data into large language models (LLMs) and multimodal models. They provide an additional axis for information retrieval, complementing vector search.
- Industry Use Cases: Commonly used in customer data platforms and CRM models to map relationships within and between companies. Knowledge graphs can convert complex datasets into structured, easily queryable formats.
- Challenges and Limitations: Familiarity with graph databases and the ETL process for graph data integration is still developing. Graph structures are less common and more complex than traditional relational models.
- Integrating Knowledge Graphs with LLMs: Knowledge graphs enrich data integration and semantic understanding, adding context to text retrieved by LLMs. They can help reduce hallucinations in LLMs by grounding responses with more accurate and comprehensive context.
- Graph RAG (Retrieval Augmented Generation): Combines knowledge graphs with RAG to provide additional context for LLM-generated responses. Allows retrieval of data not directly cited in the text, enhancing the breadth of information available for queries (see the sketch after this summary).
- Scalability and Efficiency: Effective graph database architectures can handle large-scale graph data efficiently. Graph RAG requires a robust ingestion pipeline and careful management of data freshness and retrieval processes.
- Future Developments: Growing interest and implementation of knowledge graphs and Graph RAG in various industries. Potential for new tools and standardization efforts to make these technologies more accessible and effective.
- Graphlit, Simplifying Knowledge Graphs: The platform focuses on simplifying the creation and use of knowledge graphs for developers. Provides APIs for easy integration, supporting domain-specific vertical AI applications. Offers a unified pipeline for data ingestion, extraction, and knowledge graph construction.
- Open Source and Community Contributions: Recommendations for libraries and projects in the knowledge graph space. Notable contributors and projects include data extraction libraries and AI agent initiatives.…
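To make the "walk the graph" idea above concrete, here is a minimal sketch using networkx. The entities, relations, and prompt format are invented for illustration; this is not Graphlit's pipeline or API.

```python
# Toy knowledge graph plus a "walk the graph" retrieval step for Graph RAG.
# The entities and relations below are made up for the example.
import networkx as nx

graph = nx.DiGraph()
graph.add_edge("Acme Corp", "Jane Doe", relation="employs")
graph.add_edge("Jane Doe", "Project Falcon", relation="leads")
graph.add_edge("Acme Corp", "Globex", relation="supplies")


def neighborhood_facts(entity: str, hops: int = 2) -> list[str]:
    """Walk outward from an entity and express each edge as a short fact."""
    reachable = nx.ego_graph(graph, entity, radius=hops)
    return [f"{u} {d['relation']} {v}" for u, v, d in reachable.edges(data=True)]


def graph_rag_prompt(question: str, entity: str) -> str:
    """Ground an LLM prompt in facts pulled from the graph, not just raw text."""
    context = "\n".join(neighborhood_facts(entity))
    return f"Facts:\n{context}\n\nQuestion: {question}"


print(graph_rag_prompt("Who leads Project Falcon?", "Acme Corp"))
```

The point of the graph hop is that facts two relations away from the queried entity can be surfaced even when no single document states them together, which is the extra context Graph RAG adds on top of plain vector retrieval.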
At LanguageTool, Bartmoss St Clair (Head of AI) is pioneering the use of Large Language Models (LLMs) for grammatical error correction (GEC), moving away from the tool's initial non-AI approach to create a system capable of catching and correcting errors across multiple languages. LanguageTool supports over 30 languages, has several million users and over 4 million installations of its browser add-on, and benefits from a diverse team of employees from around the world.

Episode Summary
- LanguageTool decided against using existing LLMs like GPT-3 or GPT-4 because of the cost, speed, and accuracy benefits of developing their own models, focusing on striking a balance between performance, speed, and cost.
- The tool is designed to work with low latency for real-time applications, catering to a wide range of users including academics and businesses, with the aim of delivering accurate grammar correction without being intrusive.
- Bartmoss discussed the nuanced approach to grammar correction, acknowledging that language evolves and user preferences may vary, necessitating a balance between strict grammatical rules and user acceptability.
- The company employs a mix of decoder and encoder-decoder models depending on the task, with a focus on contextual understanding and the challenges of maintaining the original meaning of text while correcting grammar.
- A hybrid system that combines rule-based algorithms with machine learning is used to provide nuanced grammar corrections and explanations for the corrections, enhancing user understanding and trust (a minimal sketch of this idea follows after this summary).
- LanguageTool is developing a generalized GEC system, incorporating legacy rules and machine learning for comprehensive error correction across various types of text.
- Training models involves a mix of user data, expert-annotated data, and synthetic data, aiming to reflect real user error patterns for effective correction.
- The company has built tools to benchmark GEC tasks, focusing on precision, recall, and user feedback to guide quality improvements.
- The introduction of LLMs has expanded LanguageTool's capabilities, including rewriting and rephrasing, and improved error detection beyond simple grammatical rules.
- Despite the higher costs associated with LLMs and hosting infrastructure, the investment is seen as worthwhile for improving user experience and conversion rates for premium products.
- Bartmoss speculates on the future impact of LLMs on language evolution, noting their current influence and the importance of adapting to changes in language use over time.
- LanguageTool prioritizes privacy and data security, avoiding external APIs for grammatical error correction and developing their systems in-house with open-source models.…
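The hybrid rule-plus-model idea mentioned above can be sketched in a few lines. This is not LanguageTool's implementation; the two regex rules and the `correct_with_model` stub are placeholder assumptions showing how explainable rules can run ahead of a learned model.

```python
# Hybrid grammatical error correction sketch: deterministic rules first,
# then a learned model for anything the rules cannot explain.
# The rules and the model stub are illustrative assumptions only.
import re
from dataclasses import dataclass


@dataclass
class Correction:
    start: int
    end: int
    replacement: str
    explanation: str


RULES = [
    # (pattern, replacement, human-readable explanation)
    (re.compile(r"\bcould of\b"), "could have", '"could of" is a misspelling of "could have".'),
    (re.compile(r"\bteh\b"), "the", 'Common typo for "the".'),
]


def rule_based_pass(text: str) -> list[Correction]:
    """Apply transparent rules; each match carries its own explanation."""
    found = []
    for pattern, repl, why in RULES:
        for m in pattern.finditer(text):
            found.append(Correction(m.start(), m.end(), repl, why))
    return found


def correct_with_model(text: str) -> str:
    """Stub for a neural GEC model (e.g. an encoder-decoder); returns text unchanged here."""
    return text


def correct(text: str) -> str:
    # Apply rule corrections right-to-left so earlier offsets stay valid.
    for c in sorted(rule_based_pass(text), key=lambda c: c.start, reverse=True):
        text = text[:c.start] + c.replacement + text[c.end:]
    # Hand everything else to the learned model.
    return correct_with_model(text)


print(correct("I could of fixed teh bug."))
```

The appeal of the split is that rule hits come with human-readable explanations, while the model handles errors no hand-written rule anticipates.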
In this enlightening episode, Dr. Julia Stoyanovich delves into the world of responsible AI, exploring the ethical, societal, and technological implications of AI systems. She underscores the importance of global regulations, human-centric decision-making, and the proactive management of biases and risks associated with AI deployment. Through her expert lens, Dr. Stoyanovich advocates for a future where AI is not only innovative but also equitable, transparent, and aligned with human values. Julia is an Institute Associate Professor at NYU in both the Tandon School of Engineering and the Center for Data Science. In addition, she is Director of the Center for Responsible AI, also at NYU. Her research focuses on responsible data management, fairness, diversity, transparency, and data protection in all stages of the data science lifecycle.

Episode Summary
- The definition of Responsible AI
- An example of ethical AI in the medical world: fast MRI technology
- Fairness and diversity in AI
- The role of regulation: what it can and can't do
- Transparency, bias in AI models and data protection
- The dangers of Gen AI hype and problematic AI narratives from the tech industry
- The importance of humans in ensuring ethical development
- Why "Responsible AI" is actually a bit of a misleading term
- What Data & AI leaders can do to practise Responsible AI…
Luis Moreira-Matias is Senior Director of Artificial Intelligence at sennder, Europe's leading digital freight forwarder. At sennder, Luis founded sennAI: sennder's organization that oversees the creation, from R&D to real-world productization, of proprietary AI technology for the road logistics industry. During his 15-year career, Luis has led 50+ FTEs across 4+ organisations to develop award-winning ML solutions to address real-world problems in fields such as e-commerce, travel, logistics, and finance. Luis holds a Ph.D. in Machine Learning from the University of Porto, Portugal. He has a world-class academic track record, with high-impact publications at top-tier venues in ML/AI fundamentals, 5 patents, and multiple keynotes worldwide, ranging from Brisbane (Australia) to Las Palmas (Spain).…
In this episode Tarush Aggarwal, formerly of Salesforce and WeWork, is back on the podcast to discuss the evolution of the semantic layer and how it can help practitioners get results from LLMs. We also discuss how smaller ELMs (expert language models) might be the future when it comes to consistent, reliable outputs from Generative AI, and the impact of all of this on traditional BI tools.…
In this episode Patrick McQuillan shares his innovative Biological Model - a concept you can use to enhance data outcomes in large enterprises. The concept is built on the idea that the best way to design a data strategy is to align it closely with a biological system. He discusses the power of centralized information, the importance of data governance, and the necessity for a common performance narrative across an organization.

Episode Summary
- Biological Model concept
- Centralized vs. decentralized data
- Data collection and maturity
- Horizontal translation layer
- Partnership with vertical leaders
- Curated data layers
- Data dictionary for consistency
- Focusing on vital metrics
- Data flow in organizations
- Biological Model governance
- Overcoming inconsistency and inaccuracy…
In this episode Heidi Hurst returns to talk to us about how, in her current role at Pachama, she is using the power of machine learning to fight climate change. She discusses her work in measuring the capacity of existing forests and reforestation projects using satellite imagery.

Episode Summary
1. The importance of carbon credits verification in mitigating climate change
2. How Pachama is using machine learning and satellite imagery to verify carbon projects
3. Three types of carbon projects: avoided deforestation, reforestation, and improved forest management
4. Challenges in using satellite imagery to measure the capacity of existing forests
5. The role of multispectral imaging in measuring density of forests (a small illustrative calculation follows after this list)
6. Challenges in collecting data from dense rainforests and weather obstructions
7. The impact of machine learning on scaling up carbon verification
8. Advancements in the field of satellite imaging, particularly in small satellite constellations…
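As a small illustration of the multispectral point above (item 5), vegetation indices such as NDVI combine red and near-infrared bands into a single density-related signal. The arrays below are synthetic, and this generic calculation is not Pachama's verification pipeline.

```python
# NDVI from multispectral bands: (NIR - Red) / (NIR + Red).
# Dense, healthy vegetation reflects strongly in near-infrared, so higher
# NDVI loosely tracks denser canopy. Synthetic arrays stand in for real
# satellite bands; this is not Pachama's actual pipeline.
import numpy as np

red = np.array([[0.10, 0.12], [0.30, 0.25]])   # red band reflectance
nir = np.array([[0.60, 0.55], [0.35, 0.30]])   # near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-9)        # epsilon avoids divide-by-zero
print(np.round(ndvi, 2))                       # values near 1 suggest dense vegetation
print("mean NDVI:", float(ndvi.mean()))
```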
Ágnes Horvát is an Assistant Professor in Communication and Computer Science at Northwestern University. Her work focuses on understanding how online networks induce biased information production, sharing and processing across digital platforms.

- The new post-normal era for science: having an awareness of the context and values that impact scientific research.
- Where is science communication in relation to digital platforms? Scholars work hard on discovering scientific findings, but the information doesn't always reach the public appropriately.
- How to communicate scientific research: it's not just about communicating with scientists and general audiences. News needs to reach policymakers and governments too for real change.
- The production of scientific research has exploded recently thanks to decision-making demands, and the pandemic had a lot to do with this. Scientists were under pressure to carry out research quickly and at the expense of quality.
- Misinformation can have detrimental consequences, even leading to vaccine hesitancy in some communities.
- The surprising effect of retracting papers: papers that are later retracted are more likely to receive more engagement before being withdrawn.
- Why are paper retractions on the rise? Again, the recent pandemic has caused an increase in retractions.
- Is social media helping or hindering science research? While the platforms help spread real news, social media also helps the spread of false information.
- As long as you have quality data and robust trends, you will identify that trend regardless of the method.
- Reducing the problem of miscommunication: with whom does the responsibility lie?…
Modern data infrastructures and platforms store huge amounts of multidimensional data. But data pipelines frequently break, and a machine learning algorithm's performance is only as good as the quality and reliability of the data itself. In this episode we are joined by Lior Gavish and Ryan Kearns of Monte Carlo, to talk about how the new concept of Data Observability is advancing data reliability and data quality at scale.

Episode Summary
- An overview of data reliability/quality and why it is so critical for organisations
- The limitations of traditional approaches in the area of data reliability
- Data observability and why it is different from traditional approaches to data quality
- The 5 Pillars of Data Observability (a small illustrative sketch follows after this summary)
- How to improve data reliability/quality at scale and generate trust in data with stakeholders
- How observability can lead to better outcomes for data science and engineering teams
- Examples of data observability use cases in industry
- Overview of O'Reilly's upcoming book, The Fundamentals of Data Quality.…
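The pillars of data observability are often listed as freshness, distribution, volume, schema, and lineage. As a rough illustration only, and not Monte Carlo's product or the approach covered in the episode, the sketch below runs crude checks for four of them over a pandas DataFrame; the thresholds and column names are assumptions.

```python
# Crude data observability checks over a pandas DataFrame, loosely following
# the pillars often listed as freshness, volume, schema, and distribution
# (lineage needs pipeline metadata and is omitted). Thresholds are illustrative.
import pandas as pd


def observability_report(df: pd.DataFrame, expected_columns: set[str],
                         ts_column: str, max_staleness_hours: float = 24.0,
                         min_rows: int = 1000) -> dict:
    report = {}
    # Freshness: how old is the newest record?
    age = pd.Timestamp.utcnow() - pd.to_datetime(df[ts_column], utc=True).max()
    report["fresh"] = age <= pd.Timedelta(hours=max_staleness_hours)
    # Volume: did we receive roughly as many rows as expected?
    report["volume_ok"] = len(df) >= min_rows
    # Schema: are expected columns missing, or unexpected ones present?
    report["missing_columns"] = sorted(expected_columns - set(df.columns))
    report["unexpected_columns"] = sorted(set(df.columns) - expected_columns)
    # Distribution: null rate per column as a cheap drift signal.
    report["null_rate"] = df.isna().mean().round(3).to_dict()
    return report
```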
In this episode we are joined by Julia Stoyanovich from NYU, to talk about her work on how AI is being used in the hiring process. Whether you are responsible for hiring on behalf of a business or are a job seeker, you will find this podcast very interesting, but for very different reasons.

Episode Summary
- Algorithmic decision making in the hiring process: what does it mean for businesses and job seekers?
- The hiring process and the funnel effect
- Lack of public disclosure about the use of algorithmic tools as part of the talent acquisition pipeline
- Are job seekers being unfairly screened out of the hiring process?
- How AI-based implementations of psychometric instruments are used today
- Is it possible to measure a person's personality based on data alone?
- Do these systems remove bias and discrimination from the hiring process?
- Testing the stability and consistency of these algorithmic systems
- Vendors of systems and their lack of testing / recognition of the issues
- Are new laws needed so the hiring process is fairer and more transparent?
- What does the future of hiring look like: fewer AI systems and more human intervention?…
In this episode we are joined by Professor Maurizio Porfiri from NYU, to talk about his latest academic research, which uses data science to uncover why sales of guns in the USA increase after a mass shooting event. His interest and research were born of a very personal experience 14 years ago, when he experienced a mass shooting event at Virginia Tech, where he was studying.

- Researching complex systems
- The Virginia Tech mass shooting event and its impact on Maurizio
- What is the relationship between mass shooting events and the purchase of guns?
- Analysis of time series data: 70 mass shootings in around 20 years
- Can media coverage of mass shootings shape public opinion, thereby influencing firearm acquisition?
- Examining the correlation between three distinct datasets
- What are the causes of increased gun sales in the aftermath of mass shooting events?
- Differences in the data at state level vs. national level
- Researching the complex firearm ecosystem with all its pieces: prevalence, violence and regulation…
In this episode we are joined by Perry Marshall to talk about his latest scientific paper, entitled "Biology Transcends the Limits of Computation". We also discuss his $10 million Evolution 2.0 Science Prize, currently the largest science prize in the world. His paper pushes the boundaries of the field of evolutionary biology, and his science prize is raising some truly fascinating and thought-provoking implications for the development of strong AI.…
In this episode we are joined by Arnon Houri Yafin, an Israeli entrepreneur and the founder of a company called Zzapp Malaria, which recently won the AI XPRIZE sponsored by IBM Watson. Their work in using AI to eliminate malaria in Africa is both interesting and inspirational.

Episode Summary
- Moving from malaria control to malaria elimination
- How Zzapp Malaria started and how investors were attracted
- The use of drones, satellite imagery, topography, rain/humidity data and a new mobile app
- The development of small neural networks to identify the potential for small water bodies
- How IBM Watson assisted with funding as well as with the machine learning models
- The use of biological agents rather than chemicals to treat stagnant water bodies
- The NGO project in Ghana, which reduced mosquitos by 60% in 100 days
- The latest and biggest operation to date, in São Tomé and Príncipe
- The direct impact on a country's GDP as a result of eliminating malaria
- How malaria and poverty are interconnected
- Winning the $3 million XPRIZE and how the money will be used
- How you can help by supporting Malaria No More and Only Nets…
In this episode we are joined by the Director of AI and Data Operations at XPRIZE, whose career path into the world of AI is fascinating. Neama Dadkhahnikoo shares his journey from his early days at Boeing back in 2005, through start-up ventures Techspert and Caregivers Direct and retraining, right through to the present day at XPRIZE. He reveals how anyone has the potential to make a real difference in using AI to help solve real-world problems.

- The history of challenge prize competitions and how the British Monarchy was involved
- Challenge prizes as philanthropy with capitalism thrown in
- How a clockmaker determined longitude to win the first ever prize
- How industries are born out of successful challenge prize competitions
- The impact of XPRIZE on the commercial space industry
- The ethos of XPRIZE: a global positive-future movement
- How the challenges are chosen
- The IBM Watson AI XPRIZE, a $5 million challenge for teams to use AI for good
- How to monitor the after-prize impact
- Three AI XPRIZE finalists: Aifred Health, Marinus Analytics and ZzappMalaria
- How was AI defined for the challenge?
- How to use and get involved with AI for good…
In this episode we are joined by an industry veteran who has worked for some of the biggest names in the enterprise data world. Tarush Aggarwal shares his journey from his early days at Salesforce and then WeWork, right through to the present day. He reveals how to set Data Science & Engineering up for success in both small and large organisations.

Episode Summary
- How Salesforce leveraged data to grow their company fast
- How Marc Benioff ensured his vision was executed effectively at Salesforce.com
- What it was like to join WeWork at the start of their data function
- The differences between how WeWork and Salesforce.com leveraged data
- How to structure a data function: centralised vs. decentralised vs. hybrid model
- How Spotify structured their data team to scale the business
- Can a Fortune 500 business make the hybrid model work?
- The fundamentals for a new start-up: how to get building a data function right
- Product company versus service delivery company: how does that affect the data function structure?
- What's next for data privacy?
- The 5x Company entry-level training program: what it is and who it's for
- Data Mastermind groups: are they the way forward?…
In this episode we discuss the rapidly developing field of Satellite Imaging. Our guests on this show are Heidi Hurst & Jerry He. They are two remarkable industry Data Scientists with a strong academic pedigree and experience in the field of Satellite Image Processing. Heidi is based in Washington DC and Jerry is based in New York. Join us as they discuss their journey into Satellite Imaging and share with us the latest developments in this fascinating and evolving area of Data Science.

Episode Summary
- Why is satellite image processing such an exciting field?
- What data sources is satellite image data based on?
- What are the challenges in using satellite image data?
- Sensors used in satellite imaging
- Methods used in satellite imaging: image processing, deep learning, CNNs (a minimal sketch follows after the resources below)
- The socio-economic applications
- Industry applications for satellite imaging: agriculture, supply chain monitoring, sales prediction, insurance
- The future of satellite imaging

RESOURCES:
- Cool visual - one hour of active satellites orbiting Earth: https://www.reddit.com/r/dataisbeautiful/comments/j7pj62/oc_one_hour_of_active_satellites_orbiting_earth/?utm_source=share&utm_medium=ios_app&utm_name=iossmf
- DOTA - https://captain-whu.github.io/DOTA/ - open dataset for object detection in overhead imagery
- COWC - https://gdo152.llnl.gov/cowc/ - Cars Overhead with Context - detection dataset for car counting algorithms
- xView - http://xviewdataset.org/ - dataset put together by the National Geospatial Intelligence Agency for an object detection challenge, including some particularly rare classes…
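As a generic illustration of the CNN-based methods mentioned in the summary above, the sketch below defines a tiny PyTorch classifier for overhead image chips. The architecture, chip size, and class count are arbitrary assumptions and are unrelated to the DOTA, COWC, or xView datasets listed in the resources.

```python
# Minimal CNN for classifying small overhead-imagery chips (e.g. "car" vs "background").
# The architecture, 64x64 chip size, and two-class setup are arbitrary choices
# for illustration only, not a model discussed in the episode.
import torch
import torch.nn as nn


class ChipClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global average pooling
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


model = ChipClassifier()
chips = torch.randn(8, 3, 64, 64)               # a batch of fake RGB chips
logits = model(chips)
print(logits.shape)                             # torch.Size([8, 2])
```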
Every so often on the podcast we will bring you something a little bit different. This episode is part two of our conversation with esports legends TLO & MaNa. They are professional StarCraft II players, and they tell us the story of what it was like to compete against Google DeepMind's AlphaStar AI agent. This is a fascinating discussion about the technical capability of AI agents and about the psychology involved when humans take on the machines.

Episode Summary
- The live event rematch against AlphaStar
- The game plan for the rematch
- Trying to match AlphaStar
- The importance of the human aspect to the future of StarCraft II
- What TLO & MaNa learned from AlphaStar
- The importance of human intervention to prevent mistakes from the AI
- What impact could AlphaStar have on improving esports players?
- Did AlphaStar show any signs of being able to improvise?
- Mistakes AlphaStar made
- Why limiting the abilities of an AI might make it smarter
- Can AI develop intuition in the future?

Resources:
- DeepMind AlphaStar videos: https://deepmind.com/research/open-source/alphastar-resources
- TLO profile: https://liquipedia.net/starcraft2/TLO
- MaNa profile: https://liquipedia.net/starcraft2/MaNa…
Every so often on the podcast we will bring you something that is a little bit different. This episode is part one of a conversation with esports legends TLO & MaNa. They are professional StarCraft II players, and they tell us the story of what it was like to compete against Google DeepMind's AlphaStar AI agent. This is a fascinating discussion about the technical capability of AI agents and about the psychology involved when humans take on the machines.

Episode Summary
- A typical day for an esports athlete
- The similarities and differences between esports and traditional sports
- The importance of actions and screens per minute for high performance
- The role of gaming in driving the development of AI
- The evolution of gaming AI agents
- The challenges in competing against a black box
- Exploiting the game-playing tendencies of AlphaStar
- The psychological pressure of playing against AlphaStar
- Underestimating the ability of AlphaStar

Resources:
- DeepMind AlphaStar videos: https://deepmind.com/research/open-source/alphastar-resources
- TLO profile: https://liquipedia.net/starcraft2/TLO
- MaNa profile: https://liquipedia.net/starcraft2/MaNa…
This is Part Two of our conversation about Deep Fakes with two experts in their respective fields. We talk to Dr Eileen Culloty of the Institute for Future Media and Journalism at Dublin City University and Dr Stephane Lathuiliere of Telecom Paris.

EPISODE SUMMARY:
- The implications of disinformation for the media
- The role of fact checking
- How to deal with Deep Fakes
- What Adobe and Microsoft are doing about Deep Fakes
- The future role of GANs in detecting Deep Fakes

Resources:
- Video of First Order Motion Model for video animation: https://www.youtube.com/watch?v=u-0cQ-grXBQ&ab_channel=AliaksandrSiarohin
- PROVENANCE program: https://fujomedia.eu/provenance/…
This is Part One of our conversation about Deep Fakes with two experts in their respective fields. We talk to Dr Eileen Culloty of the Institute for Future Media and Journalism at Dublin City University and Dr Stephane Lathuiliere of Telecom Paris. Stephane reveals what is and is not technically possible with current Deep Fake technology. Eileen helps us cut through the hype about Deep Fakes and tells us about their real-world social and political impact.

EPISODE SUMMARY:
- A short history of media manipulation
- The breakthroughs in Deep Learning enabling current Deep Fake technology
- The role of increased data availability in generating Deep Fakes
- Why cheap fakes are still a bigger problem than Deep Fakes
- How the First Order Motion Model has advanced the field of image animation
- Positive use cases of Deep Fake technology
- The future of image animation and Deep Fake technology
- Challenges for media and journalism in the age of Deep Fake technology
- The societal impact of disinformation and fake news content
- Deep Fakes vs. cheap fakes during the COVID pandemic

RESOURCES:
- Video of First Order Motion Model for video animation: https://www.youtube.com/watch?v=u-0cQ-grXBQ&ab_channel=AliaksandrSiarohin
- PROVENANCE program: https://fujomedia.eu/provenance/…
This is Part 2 of our conversation with Professor Philipp Koehn of Johns Hopkins University. Professor Koehn is one of the world's leading experts in the field of Machine Translation & NLP. In this episode we delve into commercial applications of machine translation, the open source tools available, and what to expect from the field in the future.

Episode Summary:
- Typical datasets used for training models
- The role of infrastructure and technology in Machine Translation
- How academic research in Machine Translation has manifested in industry applications
- Overview of what's available in open source tools for Machine Translation
- The future of Machine Translation and whether it can pass a Turing test

Resources:
- Philipp Koehn's latest book, Neural Machine Translation - Amazon link: https://www.amazon.com/Neural-Machine-Translation-Philipp-Koehn/dp/1108497322
- Omniscien Technologies - leading enterprise provider of machine translation services: https://omniscien.com/
- Open source tools:
  - Fairseq: https://fairseq.readthedocs.io/en/latest/
  - Marian: https://marian-nmt.github.io/
  - OpenNMT: https://opennmt.net/
  - Sockeye: https://awslabs.github.io/sockeye/
- Translated texts (parallel data) for training:
  - OPUS: http://opus.nlpl.eu/
  - Paracrawl: https://paracrawl.eu/
- Two papers mentioned about excessive use of computing power to train NLP models:
  - GPT-3: https://arxiv.org/abs/2005.14165
  - RoBERTa: https://arxiv.org/abs/1907.11692…
Professor Philipp Koehn of Johns Hopkins University discusses the evolution of machine translation and the fundamentals for using Neural Networks to deliver machine translation.

Episode Summary:
- Philipp Koehn bio
- What is Machine Translation?
- Adequacy & fluency
- How to quantify the performance of Machine Translation models (a small scoring example follows after the resources below)
- The transition from statistical approaches to using Neural Networks for translation
- Validating the outputs of models
- What can go wrong with Machine Translation?

Resources:
- Philipp Koehn's latest book, Neural Machine Translation - Amazon link: https://www.amazon.com/Neural-Machine-Translation-Philipp-Koehn/dp/1108497322…
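One standard way to quantify machine translation performance, as raised in the summary above, is a corpus-level BLEU score. The snippet below uses the sacrebleu library with invented sentences; it is a generic example, not the evaluation setup discussed in the episode.

```python
# Scoring machine translation output with BLEU via the sacrebleu library.
# The hypothesis/reference sentences are invented for illustration; BLEU is
# one common automatic metric, not the only evaluation discussed in the episode.
import sacrebleu

hypotheses = [
    "The cat sat on the mat.",
    "He bought two loaves of bread yesterday.",
]
references = [
    "The cat sat on the mat.",
    "Yesterday he bought two loaves of bread.",
]

# corpus_bleu expects one list of hypotheses and a list of reference lists
# (to allow multiple references per sentence).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}")   # 0-100 scale; higher means closer to the references
```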
Your co-hosts Damien Deighan and Philipp Diesinger discuss the concept behind the Data Science Conversations Podcast. You will find out what to expect from the show in the coming weeks.