Player FM - Internet Radio Done Right
521 subscribers
Checked 12h ago
Added ten years ago
Content provided by The New Stack Podcast and The New Stack. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The New Stack Podcast and The New Stack or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ar.player.fm/legal.
Player FM - Podcast App
Go offline with the Player FM app!
The New Stack Podcast
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
894 episodes
All episodes
VMware's Kubernetes Evolution: Quashing Complexity (30:40)
Without such pre-integration, developers waste time managing infrastructure instead of focusing on code. VMware addresses this with VCF, a pre-integrated Kubernetes solution that includes components like Harbor, Velero, and Istio, all managed by VMware. While some worry about added complexity from abstraction, Turner dismissed concerns about virtualization overhead, pointing to benchmarks showing 98.3% of bare-metal performance for virtualized AI workloads. He emphasized that AI is driving nearly half of Kubernetes deployments, prompting VMware’s partnership with Nvidia to support GPU virtualization. Turner also highlighted VMware's open source leadership, contributing to major projects and ensuring Kubernetes remains cloud-independent and standards-based. VMware aims to simplify Kubernetes and AI workload management while staying committed to the open ecosystem. Learn more from The New Stack about the latest insights with VMware:
Has VMware Finally Caught Up With Kubernetes?
VMware’s Golden Path
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
Prequel is launching a new developer-focused service aimed at democratizing software error detection—an area typically dominated by large cloud providers. Co-founded by Lyndon Brown and Tony Meehan, both former NSA engineers, Prequel introduces a community-driven observability approach centered on Common Reliability Enumerations (CREs). CREs categorize recurring production issues, helping engineers detect, understand, and communicate problems without reinventing solutions or working in isolation. Their open-source tools, cre and prereq, allow teams to build and share detectors that catch bugs and anti-patterns in real time—without exposing sensitive data, thanks to edge processing using WebAssembly. The urgency behind Prequel’s mission stems from the rapid pace of AI-driven development, increased third-party code usage, and rising infrastructure costs. Traditional observability tools may surface symptoms, but Prequel aims to provide precise problem definitions and actionable insights. While observability giants like Datadog and Splunk dominate the market, Brown and Meehan argue that engineers still feel overwhelmed by data and underpowered in diagnostics—something they believe CREs can finally change. Learn more from The New Stack about the latest Observability insights:
Why Consolidating Observability Tools Is a Smart Move
Building an Observability Culture: Getting Everyone Onboard
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
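To make the CRE idea concrete, here is a minimal sketch of what a shared, rule-based failure detector could look like. The rule schema, identifier, and log pattern below are invented for illustration; they are not the actual formats used by the cre or prereq tools.

```python
import re

# Hypothetical CRE-style rule; the real cre/prereq schema may differ.
RULE = {
    "id": "CRE-EXAMPLE-0001",  # illustrative identifier, not a real CRE
    "title": "Connection pool exhaustion",
    "pattern": r"remaining connection slots are reserved",
    "mitigation": "Raise max_connections or add connection pooling.",
}

def detect(log_lines):
    """Return rule metadata for the first log line matching a known pattern."""
    matcher = re.compile(RULE["pattern"])
    for line in log_lines:
        if matcher.search(line):
            return {"rule": RULE["id"], "line": line,
                    "mitigation": RULE["mitigation"]}
    return None

print(detect(["INFO: ready",
              "FATAL: remaining connection slots are reserved"]))
```

The point of a shared catalog is that the rule, not just the symptom, travels between teams: anyone hitting the same failure gets the same name and the same mitigation for it.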
Arm’s Open Source Leader on Meeting the AI Challenge (18:21)
At Arm, open source is the default approach, with proprietary software requiring justification, says Andrew Wafaa, fellow and senior director of software communities. Speaking at KubeCon + CloudNativeCon Europe, Wafaa emphasized Arm’s decade-long commitment to open source, highlighting its investment in key projects like the Linux kernel, GCC, and LLVM. This investment is strategic, ensuring strong support for Arm’s architecture through vital tools and system software. Wafaa also challenged the hype around GPUs in AI, asserting that CPUs—especially those enhanced with Arm’s Scalable Matrix Extension (SME2) and Scalable Vector Extension (SVE2)—are often more suitable for inference workloads. CPUs offer greater flexibility, and Arm’s innovations aim to reduce dependency on expensive GPU fleets. On the AI framework front, Wafaa pointed to PyTorch as the emerging hub, likening its ecosystem-building potential to Kubernetes. As a PyTorch Foundation board member, he sees PyTorch becoming the central open source platform in AI development, with broad community and industry backing. Learn more from The New Stack about the latest insights about Arm:
Edge Wars Heat Up as Arm Aims to Outflank Intel, Qualcomm
Arm: See a Demo About Migrating a x86-Based App to ARM64
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
Why Kubernetes Cost Optimization Keeps Failing (17:22)
In today’s uncertain economy, businesses are tightening costs, including for Kubernetes (K8s) operations, which are notoriously difficult to optimize. Yodar Shafrir, co-founder and CEO of ScaleOps, explained at KubeCon + CloudNativeCon Europe that dynamic, cloud-native applications have constantly shifting loads, making resource allocation complex. Engineers must provision enough resources to handle spikes without overspending, but in large production clusters with thousands of applications, manual optimization often fails. This leads to 70–80% resource waste and performance issues. Developers typically prioritize application performance over operational cost, and AI workloads further strain resources. Existing optimization tools offer static recommendations that quickly become outdated due to the dynamic nature of workloads, risking downtime. Shafrir emphasized that real-time, fully automated solutions like ScaleOps’ platform are crucial. By dynamically adjusting container-level resources based on real-time consumption and business metrics, ScaleOps improves application reliability and eliminates waste. Their approach shifts Kubernetes management from static to dynamic resource allocation. Listen to the full episode for more insights and ScaleOps’ roadmap. Learn more from The New Stack about the latest in scaling Kubernetes and managing operational costs:
ScaleOps Adds Predictive Horizontal Scaling, Smart Placement
ScaleOps Dynamically Right-Sizes Containers at Runtime
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
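For a sense of what dynamic resource allocation means mechanically, here is a minimal sketch that right-sizes a Deployment's requests with the official Kubernetes Python client. It illustrates the general pattern only; it is not ScaleOps' implementation, and the deployment, namespace, and resource values are placeholders.

```python
from kubernetes import client, config

def rightsize(deployment, namespace, container, cpu, memory):
    """Patch a Deployment's resource requests to match observed usage."""
    patch = {"spec": {"template": {"spec": {"containers": [{
        "name": container,  # strategic merge patch matches containers by name
        "resources": {"requests": {"cpu": cpu, "memory": memory}},
    }]}}}}
    client.AppsV1Api().patch_namespaced_deployment(
        name=deployment, namespace=namespace, body=patch)

if __name__ == "__main__":
    config.load_kube_config()  # use load_incluster_config() inside a pod
    # Example: shrink an over-provisioned service toward its measured usage.
    rightsize("checkout", "prod", "app", cpu="250m", memory="512Mi")
```

Note that patching the pod template triggers a rolling restart; adjusting running containers continuously without disruption is precisely the harder problem that platforms in this space compete on.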
How Heroku Is ‘Re-Platforming’ Its Platform (18:01)
Heroku has been undergoing a major transformation, re-platforming its entire Platform as a Service (PaaS) offering over the past year and a half. This ambitious effort, dubbed “Fir,” will soon reach general availability. According to Betty Junod, CMO and SVP at Heroku (owned by Salesforce), the overhaul includes a shift to Kubernetes and OCI standards, reinforcing Heroku’s commitment to open source. The platform now features Heroku Cloud Native Buildpacks, which let developers create container images without Dockerfiles. Originally built on Ruby on Rails and predating Docker and AWS, Heroku now supports eight programming languages. The company has also deepened its open source engagement by becoming a platinum member of the Cloud Native Computing Foundation (CNCF), contributing to projects like OpenTelemetry. Additionally, Heroku has open sourced its Twelve-Factor Apps methodology, inviting the community to help modernize it to address evolving needs such as secrets management and workload identity. This signals a broader effort to align Heroku’s future with the cloud native ecosystem. Learn more from The New Stack about Heroku's approach to Platform-as-a-Service:
Return to PaaS: Building the Platform of Our Dreams
Heroku Moved Twelve-Factor Apps to Open Source. What’s Next?
How Heroku Is Positioned To Help Ops Engineers in the GenAI Era
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
Container Security and AI: A Talk with Chainguard's Founder (20:51)
In this episode of The New Stack Makers, recorded at KubeCon + CloudNativeCon Europe, Alex Williams speaks with Ville Aikas, Chainguard founder and early Kubernetes contributor. They reflect on the evolution of container security, particularly how early assumptions—like trusting that users would validate container images—proved problematic. Aikas recalls the lack of secure defaults, such as allowing containers to run as root, stemming from the team’s internal Google perspective, which led to unrealistic expectations about external security practices. The Kubernetes community has since made strides with governance policies, secure defaults, and standard practices like avoiding long-lived credentials and supporting federated authentication. Aikas founded Chainguard to address the need for trusted, minimal, and verifiable container images—offering zero-CVE images, transparent toolchains, and full SBOMs. This security-first philosophy now extends to virtual machines and Java dependencies via Chainguard Libraries. The discussion also highlights the rising concerns around AI/ML security in Kubernetes, including complex model dependencies, GPU integrations, and potential attack vectors—prompting Chainguard’s move toward locked-down AI images. Learn more from The New Stack about Container Security and AI:
Chainguard Takes Aim At Vulnerable Java Libraries
Clean Container Images: A Supply Chain Security Revolution
Revolutionizing Offensive Security: A New Era With Agentic AI
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
Kelsey Hightower, AWS's Eswar Bala on Open Source's Evolution (37:52)
In a candid episode of The New Stack Makers, Kubernetes pioneer Kelsey Hightower and AWS’s Eswar Bala explored the evolving relationship between enterprise cloud providers and open source software at KubeCon + CloudNativeCon London. Hightower highlighted open source's origins as a grassroots movement challenging big vendors, and shared how it gave people—especially those without traditional tech credentials—a way into the industry. Recalling his own journey, Hightower emphasized that open source empowered individuals through contribution over credentials. Bala traced the early development of Kubernetes and his own transition from building container orchestration systems to launching AWS’s Elastic Kubernetes Service (EKS), driven by growing customer demand. The discussion, recorded at KubeCon + CloudNativeCon Europe, touched on how open source is now central to enterprise cloud strategies, with AWS not only contributing but creating projects like Karpenter, Cedar, and Kro. Both speakers agreed that open source's collaborative model—where companies build in public and customers drive innovation—has reshaped the cloud ecosystem, turning former tensions into partnerships built on community-driven progress. Learn more from The New Stack about the relationship between enterprise cloud providers and open source software:
The Metamorphosis of Open Source: An Industry in Transition
The Complex Relationship Between Cloud Providers and Open Source
How Open Source Has Turned the Tables on Enterprise Software
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
The Kro Project: Giving Kubernetes Users What They Want (21:51)
In a rare show of collaboration, Google, Amazon, and Microsoft have joined forces on Kro — the Kubernetes Resource Orchestrator — an open source, cloud-agnostic tool designed to simplify custom resource orchestration in Kubernetes. Announced during KubeCon + CloudNativeCon Europe, Kro was born from strong customer demand for a Kubernetes-native solution that works across cloud providers without vendor lock-in. Nic Slattery, product manager at Google, and Jesse Butler, principal product manager at AWS, shared with The New Stack that unlike many enterprise products, Kro didn’t stem from top-down strategy but from consistent customer "pull" experienced by all three companies. It aims to reduce complexity by allowing platform teams to offer simplified interfaces to developers, enabling resource requests without needing deep service-specific knowledge. Kro also represents a unique cross-company collaboration, driven by a shared mission and open source values. Though still in its alpha stage, the project has already attracted 57 contributors in just seven months. The team is now focused on refining core features and preparing for a production-ready release — all while maintaining a narrowly scoped, community-first approach. Learn more from The New Stack about KRO:
One Mighty kro; One Giant Leap for Kubernetes Resource Orchestration
Kubernetes Gets a New Resource Orchestrator in the Form of Kro
Orchestrate Cloud Native Workloads With Kro and Kubernetes
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
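The "simplified interface" idea is easiest to see from the developer's side: once a platform team has defined a resource graph, a developer creates one small custom object and Kro expands it into the underlying resources. A hedged sketch with the official Kubernetes Python client follows; the kind, plural, and spec fields are illustrative placeholders rather than Kro's actual schema, and the API group/version is an assumption, so check the project docs before relying on any of it.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Illustrative only: the kind, plural, and spec fields come from whatever
# interface the platform team defined; they are placeholders here.
api.create_namespaced_custom_object(
    group="kro.run", version="v1alpha1",  # assumed API group/version
    namespace="team-a", plural="webapps",
    body={
        "apiVersion": "kro.run/v1alpha1",
        "kind": "WebApp",
        "metadata": {"name": "storefront"},
        "spec": {"image": "ghcr.io/acme/storefront:1.2", "replicas": 2},
    },
)
```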
OpenSearch: What’s Next for the Search and Analytics Suite? (20:10)
OpenSearch has evolved significantly since its 2021 launch, recently reaching a major milestone with its move to the Linux Foundation. This shift from company-led to foundation-based governance has accelerated community contributions and enterprise adoption, as discussed by NetApp’s Amanda Katona in a New Stack Makers episode recorded at KubeCon + CloudNativeCon Europe. NetApp, an early adopter of OpenSearch following Elasticsearch’s licensing change, now offers managed services on the platform and contributes actively to its development. Katona emphasized how neutral governance under the Linux Foundation has lowered barriers to enterprise contribution, noting a 56% increase in downloads since the transition and growing interest from developers. OpenSearch 3.0, featuring a Lucene 10 upgrade, promises faster search capabilities—especially relevant as data volumes surge. NetApp’s ongoing investments include work on machine learning plugins and developer training resources. Katona sees the Linux Foundation’s involvement as key to OpenSearch’s long-term success, offering vendor-neutral governance and reassuring users seeking openness, performance, and scalability in data search and analytics. Learn more from The New Stack about OpenSearch:
Report: OpenSearch Bests ElasticSearch at Vector Modeling
AWS Transfers OpenSearch to the Linux Foundation
OpenSearch: How the Project Went From Fork to Foundation
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
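For readers who haven't used OpenSearch directly, a minimal query with the official opensearch-py client looks like this; the host, index name, and field are placeholders for whatever your own cluster holds.

```python
from opensearchpy import OpenSearch  # pip install opensearch-py

# Placeholder connection details; point these at your own cluster.
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

response = client.search(
    index="logs",  # placeholder index name
    body={
        "query": {"match": {"message": "timeout"}},
        "size": 5,
    },
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("message"))
```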
Kong’s AI Gateway Aims to Make Building with AI Easier (21:05)
AI applications are evolving beyond chatbots into more complex and transformative solutions, according to Marco Palladino, CTO and co-founder of Kong. In a recent episode of The New Stack Makers, he discussed the rise of AI agents, which act as "virtual employees" to enhance organizational efficiency. For instance, AI can now function as a product manager for APIs—analyzing documentation, detecting inaccuracies, and making corrections. However, reliance on AI agents brings security risks, such as data leakage and governance challenges. Organizations need observability and safeguards, but developers often resist implementing these requirements manually. As GenAI adoption matures, teams seek ways to accelerate development without rebuilding security measures repeatedly. To address these challenges, Kong introduced AI Gateway, an open-source plugin for its API Gateway. AI Gateway supports multiple AI models across providers like AWS, Microsoft, and Google, offering developers a universal API to integrate AI securely and efficiently. It also features automated retrieval-augmented generation (RAG) pipelines to minimize hallucinations. Palladino emphasized the need for consistent security in AI infrastructure, ensuring developers can focus on innovation while leveraging built-in protections. Learn more from The New Stack about Kong’s AI Gateway:
Kong: New ‘AI-Infused’ Features for API Management, Dev Tools
From Zero to a Terraform Provider for Kong in 120 Hours
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
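The gateway pattern itself is simple to picture: application code calls one stable endpoint, and the gateway handles provider routing, credentials, and safeguards behind it. A rough sketch, assuming a gateway route already configured with an AI plugin; the URL, path, payload shape, and API key are placeholders, not Kong's documented interface.

```python
import requests

# Placeholder gateway route; the real path and payload depend on how the
# gateway (e.g., Kong with an AI plugin) is configured.
GATEWAY_URL = "http://localhost:8000/ai/chat"

resp = requests.post(
    GATEWAY_URL,
    json={"messages": [
        {"role": "user", "content": "Summarize our API changelog."},
    ]},
    headers={"apikey": "example-consumer-key"},  # placeholder credential
    timeout=30,
)
resp.raise_for_status()
print(resp.json())

# Swapping the backing model or provider is a gateway configuration change;
# this client code stays the same.
```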
What’s the Future of Platform Engineering? (26:44)
Platform engineering was meant to ease the burdens of Devs and Ops by reducing cognitive load and repetitive tasks. However, building internal development platforms (IDPs) has proven challenging. Despite this, Gartner predicts that by 2026, 80% of software engineering organizations will have a platform team. In a recent New Stack Makers episode, Mallory Haigh of Humanitec and Nathen Harvey of Google discussed the current state and future of platform engineering. Haigh emphasized that many organizations rush to build IDPs without understanding why they need them, leading to ineffective implementations. She noted that platform engineering is 10% technical and 90% cultural change, requiring deep introspection and strategic planning. AI-driven automation, particularly agentic AI, is expected to shape platform engineering’s future. Haigh highlighted how AI can enhance platform orchestration and optimize GPU resource management. Harvey compared platform engineering to generative AI—both aim to reduce toil and improve efficiency. As AI adoption grows, platform teams must ensure their infrastructure supports these advancements. Learn more from The New Stack about platform engineering:
Platform Engineering on the Brink: Breakthrough or Bust?
Platform Engineers Must Have Strong Opinions
The Missing Piece in Platform Engineering: Recognizing Producers
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
AI Agents are Dumb Robots, Calling LLMs (28:31)
AI agents are set to transform software development, but software itself isn’t going anywhere—despite the dramatic predictions. On this episode of The New Stack Makers, Mark Hinkle, CEO and Founder of Peripety Labs, discusses how AI agents relate to serverless technologies, infrastructure-as-code (IaC), and configuration management. Hinkle envisions AI agents as “dumb robots” handling tasks like querying APIs and exchanging data, while the real intelligence remains in large language models (LLMs). These agents, likely implemented as serverless functions in Python or JavaScript, will automate software development processes dynamically. LLMs, leveraging vast amounts of open-source code, will enable AI agents to generate bespoke, task-specific tools on the fly—unlike traditional cloud tools from HashiCorp or configuration management tools like Chef and Puppet. As AI-generated tooling becomes more prevalent, managing and optimizing these agents will require strong observability and evaluation practices. According to Hinkle, this shift marks the future of software, where AI agents dynamically create, call, and manage tools for CI/CD, monitoring, and beyond. Check out the full episode for more insights. Learn more from The New Stack about emerging trends in AI agents:
Lessons From Kubernetes and the Cloud Should Steer the AI Revolution
AI Agents: Why Workflows Are the LLM Use Case to Watch
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
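Hinkle's "dumb robot" framing maps neatly onto a serverless handler: the function does the plumbing and the model does the thinking. A minimal Python sketch of that division of labor follows; both endpoints are invented placeholders, and a real deployment would add auth, retries, and schema validation.

```python
import json
import urllib.request

STATUS_URL = "https://api.example.com/deploys"   # placeholder data API
LLM_URL = "https://llm.example.com/v1/complete"  # placeholder model endpoint

def handler(event, context=None):
    """Serverless-style agent: fetch raw facts, delegate reasoning to an LLM."""
    # Dumb-robot part: query an API and package the data.
    with urllib.request.urlopen(STATUS_URL) as r:
        deploys = json.load(r)

    # Intelligent part: hand the data to the model and return its answer.
    prompt = "Which of these deploys look risky?\n" + json.dumps(deploys)
    req = urllib.request.Request(
        LLM_URL,
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        return json.load(r)
```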
The transition from SaaS to Services as Software with AI agents is underway, necessitating new orchestration methods similar to Kubernetes for containers. AI agents will require resource allocation, workflow management, and scalable infrastructure as they evolve. Two key trends are driving this shift:
Data Evolution – From spreadsheets to AI agents, data has progressed through relational databases, big data, predictive analytics, and generative AI.
Computing Evolution – Starting from mainframes, the journey has moved through desktops, client servers, web/mobile, SaaS, and now agentic workflows.
Janakiram MSV, an analyst, notes on this episode of The New Stack Makers that SaaS depends on data—without it, platforms like Salesforce and SAP lack value. As data becomes more actionable and compute more agentic, a new paradigm emerges: Services as Software. AI agents will automate tasks previously requiring human intervention, like emails and sales follow-ups. However, orchestrating them will be complex, akin to Kubernetes managing containers. Unlike deterministic containers, AI agents depend on dynamic, trained data, posing new enterprise challenges in memory management and infrastructure. Learn more from The New Stack about the evolution to AI agents:
How AI Agents Are Starting To Automate the Enterprise
Can You Trust AI To Be Your Data Analyst?
Agentic AI is the New Web App, and Your AI Strategy Must Evolve
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
How Generative AI Is Reshaping the SDLC (21:42)
Amazon Q Developer is streamlining the software development lifecycle by integrating AI-powered tools into AWS. In an interview at AWS in Seattle, Srini Iragavarapu, director of generative AI Applications and Developer Experiences at AWS, discussed how Amazon Q Developer enhances the developer experience. Initially focused on inline code completions, Amazon Q Developer evolved by incorporating generative AI models like Amazon Nova and Anthropic models, improving recommendations and accelerating development. British Telecom reported a 37% acceptance rate for AI-generated code. Beyond code completion, Amazon Q Developer enables developers to interact with Q for code reviews, test generation, and migrations. AWS also developed agentic frameworks to automate undifferentiated tasks, such as upgrading Java versions. Iragavarapu noted that internally, AWS used Q Developer to migrate 30,000 production applications, saving $260 million annually. The platform offers code generation, testing suites, RAG capabilities, and access to AWS custom chips, further flattening the SDLC by automating routine work. Listen to The New Stack Makers for the full discussion. Learn more from The New Stack about Amazon Q Developer:
Amazon Q Developer Now Handles Your Entire Code Pipeline
Amazon Q Apps: AI-Powered Development for All
Amazon Revamps Developer AI With Code Conversion, Security
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
OAuth Works for AI Agents but Scaling is Another Question (25:36)
Maya Kaczorowski noticed that AI identity and AI agent identity concerns were emerging from outside the security industry, rather than from CISOs and security leaders. She concluded that OAuth, the open standard for authentication, already serves the purpose of granting access without exposing passwords. Kaczorowski, a respected technologist and founder of Oblique, a startup focused on self-serve access controls, recently wrote about OAuth and AI agents and shared her insights on this episode of The New Stack Makers. She noted that developers see AI agents as extensions of themselves, granting them limited access to data and capabilities—precisely what OAuth is designed to handle. The challenges with AI agent identity are vast, involving different approaches to authentication, such as those explored by companies like AuthZed. While existing authorization models like RBAC or ABAC may still apply, the real challenge lies in scale. The exponential growth of AI-related entities—from users to LLMs—could mean even small organizations manage hundreds of thousands of agents. Future solutions must accommodate this massive scale efficiently. For the full discussion, check out The New Stack Makers interview with Kaczorowski. Learn more from The New Stack about OAuth requirements for AI Agents:
OAuth 2.0: A Standard in Name Only?
AI Agents Are Redefining the Future of Identity and Access Management
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
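The mechanics Kaczorowski describes are standard OAuth 2.0: an agent gets its own credentials and requests a token limited to the scopes it actually needs. A minimal sketch of the client-credentials grant (RFC 6749, section 4.4); the token URL, client credentials, and scope name are placeholders for whatever your authorization server defines.

```python
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"  # placeholder endpoint

def agent_token(client_id, client_secret, scopes):
    """Fetch a narrowly scoped access token via the OAuth 2.0
    client-credentials grant (RFC 6749, section 4.4)."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": " ".join(scopes),  # grant only what this agent needs
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]

# Example: an agent allowed to read calendars and nothing else.
token = agent_token("agent-42", "example-secret", ["calendar.read"])
```

The scale problem the episode raises is orthogonal to the protocol: issuing one scoped token is easy; minting, rotating, and auditing tokens for hundreds of thousands of agents is where the operational difficulty lives.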
Welcome to Player FM!
Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Sign up to sync subscriptions across devices.