Redis

Software Development

Mountain View, CA · 284,971 followers

The world's fastest data platform.

About us

Redis is the world's fastest data platform. We provide cloud and on-prem solutions for caching, vector search, and more that seamlessly fit into any tech stack. With fast setup and fast support, we make it simple for digital customers to build, scale, and deploy the fast apps our world runs on.

Website
http://redis.io
Industry
Software Development
Company size
501-1,000 employees
Headquarters
Mountain View, CA
Type
Privately Held
Founded
2011
Specialties
In-Memory Database, NoSQL, Redis, Caching, Key Value Store, real-time transaction processing, Real-Time Analytics, Fast Data Ingest, Microservices, Vector Database, Vector Similarity Search, JSON Database, Search Engine, Real-Time Index and Query, Event Streaming, Time-Series Database, DBaaS, Serverless Database, Online Feature Store, and Active-Active Geo-Distribution

Locations

  • Primary

    700 E. El Camino Real

    Suite 250

    Mountain View, CA 94041, US

  • Bridge House, 4 Borough High Street

    London, England SE1 9QQ, GB

  • 94 Yigal Alon St.

    Alon 2 Tower, 32nd Floor

    Tel Aviv, Tel Aviv 6789140, IL

  • 316 West 12th Street, Suite 130

    Austin, Texas 78701, US

Updates

  • Redis

    RedisVL is evolving from a convenience layer into a cross-language framework for building real-time, context-aware AI systems. Momentum is surging: nearly 500k downloads in October and rapid adoption of the LangGraph Checkpointer are making Redis the go-to context engine for AI agents. And with RedisVL 0.11.0, developers get major new capabilities:

    - Multi-vector queries for richer multimodal search
    - Enhanced text relevance + index customization
    - SVS-Vamana: a new high-performance, memory-efficient vector index
    - Native LangCache integration for faster, cheaper LLM apps

    RedisVL is becoming the AI-native toolkit teams rely on. Read more: https://lnkd.in/g7ZqQfvt
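
    As a rough illustration of what a RedisVL query looks like, here is a minimal Python sketch of defining an index and running a vector search. The schema, field names, and connection URL are assumptions for the example, not details from the post, and constructor and connection arguments vary between RedisVL releases.

      # Hedged sketch of a basic RedisVL vector search. Index name, fields, and
      # redis_url are assumptions; RedisVL APIs differ slightly between versions.
      from redisvl.index import SearchIndex
      from redisvl.query import VectorQuery

      schema = {
          "index": {"name": "docs", "prefix": "doc"},
          "fields": [
              {"name": "content", "type": "text"},
              {
                  "name": "embedding",
                  "type": "vector",
                  "attrs": {"dims": 384, "algorithm": "hnsw", "distance_metric": "cosine", "datatype": "float32"},
              },
          ],
      }

      # Build the index object from the schema and create it in Redis.
      index = SearchIndex.from_dict(schema, redis_url="redis://localhost:6379")
      index.create(overwrite=True)

      # Run a k-nearest-neighbor search with a precomputed query embedding
      # (replace the dummy vector with output from your embedding model).
      query = VectorQuery(
          vector=[0.1] * 384,
          vector_field_name="embedding",
          return_fields=["content"],
          num_results=3,
      )
      results = index.query(query)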

  • Redis

    ICYMI: We’re excited to join Fierce Software’s ecosystem of leading technology partners. We’ll work together to power innovations in the federal sector and work smarter with real-time data. By combining our high-performance real-time data and AI platform with Fierce Software’s deep public sector expertise, we’re empowering teams to deliver faster insights, stronger systems, and better mission outcomes. Read more about our partnership: https://lnkd.in/gdujee83

  • Redis

    Intelligent agents live or die by the quality of their context. Redis 8.4 introduces FT.HYBRID, a unified API that fuses full-text, vector, and metadata search into a single ranked result set using in-engine score fusion. This eliminates pre-filters, post-processing, and multi-step pipelines, and enables fast, accurate, context-aware retrieval. Hybrid retrieval matters because research shows it dramatically improves RAG performance—up to 49% fewer retrieval failures, 3–3.5X higher recall, and 11–15% better answer accuracy versus single-mode search. Read more: https://lnkd.in/ghZ75KJ7
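
    The FT.HYBRID syntax itself is best taken from the Redis 8.4 documentation, but the score-fusion idea behind hybrid retrieval can be sketched in a few lines of Python. The reciprocal rank fusion below is only an illustration of merging a full-text ranking with a vector ranking; it is not the FT.HYBRID API and may differ from the in-engine fusion Redis implements.

      # Illustration of score fusion for hybrid retrieval via reciprocal rank
      # fusion (RRF). This is NOT the FT.HYBRID API; it only shows the general
      # idea of fusing two ranked result lists into one.
      def rrf_fuse(text_hits, vector_hits, k=60):
          """Merge two ranked lists of document IDs into one fused ranking."""
          scores = {}
          for hits in (text_hits, vector_hits):
              for rank, doc_id in enumerate(hits):
                  # Documents ranked highly in either list accumulate more score.
                  scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
          return sorted(scores, key=scores.get, reverse=True)

      # "c" ranks well in both the full-text and vector lists, so it comes out on top.
      print(rrf_fuse(["a", "c", "b"], ["c", "d", "a"]))  # ['c', 'a', 'd', 'b']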

  • Redis reposted this

    Excited to share that our paper “waLLMartCache: A Distributed, Multi-tenant and Enhanced Semantic Caching System for LLMs” has been referenced in the new DeepLearning.AI × Redis course on Semantic Caching for AI Agents. It’s great to see semantic caching gain visibility as a key capability for building scalable AI systems. Even better when our work contributes to the conversation.

    Appreciation to my co-authors who made this research possible. Grateful to Kunal Banerjee and Anirban Chatterjee for their continued mentorship and support. Special thanks to Tyler Hutcherson for talking about our work in the course. 😊

    📄 Paper: https://lnkd.in/g5uU2Q2M
    🔗 Course link: https://lnkd.in/gfWC9BJG

    Do check out the course! #IndustryResearch #ArtificialIntelligence #MachineLearning #GenerativeAI #Caching
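
    For readers curious what semantic caching looks like in code, here is a small, assumption-laden sketch using RedisVL's SemanticCache, not the waLLMartCache system from the paper. The import path, cache name, threshold, and the call_llm stub are illustrative, and module locations shift between RedisVL versions.

      # Hedged semantic-cache sketch with RedisVL's SemanticCache; this is not
      # the paper's waLLMartCache system. Import path and defaults vary by
      # RedisVL version, and call_llm below is a stand-in for a real LLM client.
      from redisvl.extensions.llmcache import SemanticCache

      def call_llm(prompt: str) -> str:
          return "Semantic caching reuses answers to semantically similar prompts."

      cache = SemanticCache(
          name="llm_cache",                     # hypothetical cache name
          redis_url="redis://localhost:6379",
          distance_threshold=0.1,               # how close a prompt must be to count as a hit
      )

      prompt = "What is semantic caching?"
      hits = cache.check(prompt=prompt)         # vector-search the cache for similar prompts
      if hits:
          answer = hits[0]["response"]          # cache hit: skip the LLM call entirely
      else:
          answer = call_llm(prompt)             # cache miss: call the model, then store the pair
          cache.store(prompt=prompt, response=answer)
      print(answer)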

  • Redis reposted this

    As we’re all heading back home from a fantastic #reinvent week, we at Redis just want to say thank you for stopping by! Whether we saw you at the booth, Flight Club, the Hallucination Hub, or somewhere else, it was a pleasure talking with you about the challenges you’re trying to solve. If we missed you, don’t be shy; we’d love to partner with you on your upcoming 2026 initiatives. See you soon! https://redis.io/meeting/

  • Redis

    Redis 🤝 Azure OpenAI 🟰 🤖 Podbot 🤖

    I have a confession to make. I've never written a byte of code for Microsoft Azure in my life. Well, that's technically not true, because I just finished building something. But it was my first thing, so... spirit of the law and all that.

    What did I build? An AI chatbot that recommends podcasts based on your preferences. And it actually remembers what you like across conversations. I call it Podbot. You know, because it's a chatbot that talks about podcasts. It's a portmanteau. I'm not a branding expert.

    Anyhow, I wrote the application with TypeScript, Svelte 5, and Tailwind CSS. Svelte 5 is my go-to choice for building web apps—I really like the simplicity and performance. Tailwind can be a controversial choice, but I love it—I spend less time figuring out CSS and more time building features. All this code is deployed to Azure using Azure Static Web Apps and Azure Functions.

    For the AI bits I used Azure OpenAI + LiteLLM. If you haven't heard of LiteLLM, it's a really cool project that provides an OpenAI API interface that it maps to scads of different LLM providers. I just used it as an adapter to Azure OpenAI, but it provides a lot of other useful features like monitoring and rate limiting. 🚅 LiteLLM: https://www.litellm.ai/

    To manage Podbot's memory, I used Redis Agent Memory Server. This is a fairly new tool created by my much-smarter-than-me coworker at Redis, Andrew Brookins. Agent Memory Server makes it super easy to manage the short- and long-term memories that LLMs need. It stores the conversation, summarizes it when it gets too long, and even extracts long-term memories automatically. 🧠 Agent Memory Server: https://lnkd.in/gjXC-bUs

    If you want to try some of these tools, or you want to kick the tires on Podbot, the code and instructions are on GitHub. You'll need to run it yourself. Podbot isn't deployed out on the Internet or anything. Get your own tokens. 🧑💻 https://lnkd.in/erGkYGTS

    If you have questions, comments, or criticism, drop a comment. Or open an issue. Or send a PR. But please, be kind. This is LinkedIn and I know where you work. #Azure #Redis #AI #TypeScript #LLM #LiteLLM
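
    Since Podbot's code is TypeScript, the following is only a rough Python sketch of the LiteLLM-as-adapter idea described above: an OpenAI-style completion call routed to an Azure OpenAI deployment. The deployment name, endpoint, and API version are placeholders, not details from the project.

      # Rough sketch of using LiteLLM as an OpenAI-style adapter in front of
      # Azure OpenAI (Podbot itself is TypeScript; this only illustrates the
      # idea). Deployment name, endpoint, and API version are placeholders.
      import os
      from litellm import completion

      os.environ["AZURE_API_KEY"] = "<your-azure-openai-key>"
      os.environ["AZURE_API_BASE"] = "https://<your-resource>.openai.azure.com/"
      os.environ["AZURE_API_VERSION"] = "2024-02-15-preview"

      # The "azure/<deployment>" model prefix tells LiteLLM to route the call to Azure OpenAI.
      response = completion(
          model="azure/gpt-4o-mini",
          messages=[{"role": "user", "content": "Recommend a podcast about databases."}],
      )
      print(response.choices[0].message.content)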

  • Redis

    What a week at AWS re:Invent 2025. From demos and deep dives to our outpost at the Flight Club and our AI-themed happy hour, the Hallucination Hub, this week was definitely one for the books. We’re heading back home inspired by all the conversations, connections, and ideas that will keep shaping what’s next for Redis and our dev/builder community. Thank you to everyone who stopped by our booth and breakout sessions, our partners at Baseten, Tavily, and Amazon Web Services (AWS), and YOU. We can’t wait to see what you build next.

  • Redis

    What if data-querying agents could learn from every interaction and get smarter with each step? Our AI Research team put this idea to the test. Their feedback-driven architecture uses Redis to store trajectories, errors, and human feedback, turning every query into fuel for the next. ✅ The result: fewer retries, better grounding, and faster time-to-insight.

    Read more: https://lnkd.in/gxbJM7GY

    Cc: Srijith Rajamohan, Ph.D., Iliya Zhechev, Rado Ralev, Aditeya Baral, and Yash Mandilwar
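
    As a loose illustration of the pattern, and not the team's actual implementation, the sketch below logs an agent's trajectory steps and human feedback to a Redis list with redis-py so that later runs can replay them; the key names and step fields are assumptions.

      # Illustrative sketch, not the Redis AI Research team's implementation:
      # append agent trajectory steps, errors, and human feedback to a Redis
      # list so the next run can ground itself on them. Key names are assumptions.
      import json
      import time
      import redis

      r = redis.Redis(host="localhost", port=6379, decode_responses=True)

      def log_step(session_id: str, step: dict) -> None:
          """Append one trajectory step (query, error, or feedback) for a session."""
          step["ts"] = time.time()
          r.rpush(f"agent:trajectory:{session_id}", json.dumps(step))

      def load_trajectory(session_id: str) -> list:
          """Replay the full trajectory to inform the agent's next attempt."""
          return [json.loads(s) for s in r.lrange(f"agent:trajectory:{session_id}", 0, -1)]

      log_step("demo", {"type": "query", "sql": "SELECT count(*) FROM orders"})
      log_step("demo", {"type": "feedback", "note": "use the orders_v2 table instead"})
      print(load_trajectory("demo"))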

Funding

Redis: 10 total rounds

Last round: Secondary market, US$1.2M

See more info on Crunchbase