Project Management

Explore top LinkedIn content from expert professionals.

  • Andrew Ng

    Founder of DeepLearning.AI; Managing General Partner of AI Fund; Exec Chairman of LandingAI

    2,324,575 followers

    Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output. Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains.

    You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

    Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

    "Here's code intended for task X: [previously generated code]. Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it."

    Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks, including producing code, writing text, and answering questions.
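    As a concrete illustration, the generate/critique/rewrite loop described above can be sketched in a few lines of Python. The `llm` callable and the prompt wording are placeholders (any chat-completion wrapper would do); they are assumptions for the sketch, not part of the original post:

    ```python
    def reflect_and_improve(llm, task, num_rounds=2):
        """Generate an answer, then repeatedly critique and rewrite it.

        `llm` is any callable mapping a prompt string to a response string,
        e.g. a thin wrapper around a chat-completion API (placeholder here).
        """
        # Step 1: direct generation.
        output = llm(f"Write code to accomplish this task:\n{task}")
        for _ in range(num_rounds):
            # Step 2: ask the model to criticize its own output.
            critique = llm(
                f"Here's code intended for this task: {task}\n\n{output}\n\n"
                "Check the code carefully for correctness, style, and "
                "efficiency, and give constructive criticism for how to improve it."
            )
            # Step 3: rewrite with the previous output and feedback as context.
            output = llm(
                f"Task: {task}\n\nPrevious code:\n{output}\n\n"
                f"Feedback:\n{critique}\n\n"
                "Rewrite the code, applying the feedback above."
            )
        return output
    ```

    The same loop works unchanged for prose or question answering; only the prompt wording needs to change.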
    And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

    Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents: one prompted to generate good outputs, and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

    Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications' results. If you're interested in learning more about Reflection, I recommend:
    - Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
    - Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
    - CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

    [Original text: https://lnkd.in/g4bTuWtU ]
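    The tool-assisted variant, running the model's code through unit tests and reflecting on the failures, might be sketched as a toy harness like the one below. Assumptions: generated code arrives as a string, test cases are (expression, expected) pairs, and the code is trusted or sandboxed (bare `exec` is not safe on arbitrary model output):

    ```python
    def test_generated_code(code, test_cases):
        """Execute generated code and run simple assertion-style test cases.

        `test_cases` is a list of (expression, expected_value) pairs that
        reference names defined by `code`. Returns a list of error messages
        (empty if everything passes) that can be fed back for reflection.
        """
        errors = []
        namespace = {}
        try:
            exec(code, namespace)  # NOTE: only for trusted/sandboxed code
        except Exception as e:
            return [f"code failed to run: {e}"]
        for expr, expected in test_cases:
            try:
                result = eval(expr, namespace)
                if result != expected:
                    errors.append(f"{expr} returned {result!r}, expected {expected!r}")
            except Exception as e:
                errors.append(f"{expr} raised {e}")
        return errors
    ```

    If the returned list is non-empty, its messages can be pasted into the next reflection prompt ("your code failed these tests: ...") so the model repairs the specific failures rather than guessing.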

  • Mary Tresa Gabriel

    Operations Coordinator at Weir | Documenting my career transition | Project Management Professional (PMP) | Work Abroad, Culture, Corporate life & Career Coach

    25,943 followers

    If I were starting a new PROJECT today and wanted to plan it with ZERO prior knowledge, I'd do this:

    Step 1: Define Your Objective
    • Clearly articulate what success looks like for the project.
    • Break down the high-level goal into smaller, manageable milestones.
    • Ensure the objective aligns with stakeholders' expectations to avoid misalignment later.

    Step 2: Build Your Plan Backwards and Leverage Historical Data
    Most people skip this step entirely. But this is a huge mistake—because you risk creating a plan that doesn't align with deadlines, resources, or realistic expectations. Here's how:
    • Start from the final deliverable and work backward to define the timeline.
    • Gather and review historical data or similar project examples to understand typical timelines and challenges.
    • Identify key dependencies and create a logical sequence for tasks.
    • Use project planning tools (like Gantt charts or Kanban boards) to visualize your plan.
    • Clearly define roles and responsibilities for each stage.
    Pro tip: Don't forget to account for buffer time—projects rarely go 100% as planned.

    Step 3: Identify Risks and Create a Mitigation Plan
    This isn't easy. But if you can do this, you will get:
    • Clarity on potential roadblocks before they derail progress.
    • Stakeholder confidence in your ability to deliver.
    • A proactive, problem-solving mindset that boosts your credibility.
    Here's a quick way to do this: List out possible risks, evaluate their impact and likelihood, and create a plan to minimize or respond to them. Collaborate with your team to spot any blind spots. Don't skip this step.

    It took me months of trial and error (and some chaos) to crystallize these steps—hope this helps! 🚀
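    Step 3's advice to evaluate each risk's impact and likelihood is often implemented as a simple scoring matrix. A minimal sketch, assuming a 1-5 rating scale and a multiplicative score (conventions I've chosen for illustration, not specified in the post):

    ```python
    def prioritize_risks(risks):
        """Rank project risks by a simple impact x likelihood score.

        `risks` is a list of dicts with 'name', 'impact', and 'likelihood',
        each rated 1-5. The 1-5 scale and multiplicative score are one
        common convention; adapt to your team's risk framework.
        """
        for risk in risks:
            risk["score"] = risk["impact"] * risk["likelihood"]
        # Highest-scoring risks first: these get mitigation plans first.
        return sorted(risks, key=lambda r: r["score"], reverse=True)
    ```

    Reviewing the ranked list with the team is a quick way to surface the blind spots the post mentions.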

  • Jason Feng

    How-to guides for junior lawyers | Construction lawyer

    82,080 followers

    As a junior lawyer, I had to learn how to make it easy for supervisors to review my work. In case it helps, here's a step-by-step guide (with an example):

    1️⃣ Make it clear what the matter / document is and when input is needed.

    2️⃣ Set out the context and approach to preparing the deliverable
    What needs to be reviewed, how was it prepared, and what's the timeline? If you're attaching a document, include the live link to your file management platform (e.g. iManage or SharePoint) as well as a static version.

    3️⃣ Set out the next steps and your ask
    Make it clear what your supervisor needs to review. Set this out at the top of your email and proactively provide some recommendations. You can also follow up in person to make sure deadlines aren't missed.

    4️⃣ Explain how the draft is marked up
    Make it easy to navigate with specific questions (either in the document or extracted in the email). If there are mark-ups against a particular document / version, identify what they are.

    5️⃣ Summarise your inputs
    Let them know what your draft reflects, and attach the relevant inputs so they can see everything in one place. This will give your supervisor confidence that you've captured everything, and make it easier for them to check your work.

    6️⃣ Flag key aspects / assumptions
    If there are key assumptions / principles that have a big impact on how your draft is prepared, it's helpful to set them out in the email as a point of focus. Try to also set out the relevant clause / section / reference where possible.

    Is there anything else that you'd add? What else have you found helpful in making drafts easier to review, either as a junior lawyer or a supervisor?

    ------
    Btw, if you're a junior lawyer looking for practical career advice - check out the free how-to guides on my website. You can also stay updated by sending a connection / follow.

    #legalprofession #lawyers #lawstudents #lawfirms

  • Pierre Le Manh

    President and CEO, PMI

    73,233 followers

    𝗧𝗼𝗱𝗮𝘆, 𝗣𝗠𝗜 𝗿𝗲𝗹𝗲𝗮𝘀𝗲𝘀 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝗿𝗲𝘀𝘂𝗹𝘁𝘀 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝗹𝗮𝗿𝗴𝗲𝘀𝘁 𝘀𝘁𝘂𝗱𝘆 𝘄𝗲’𝘃𝗲 𝗲𝘃𝗲𝗿 𝗰𝗼𝗻𝗱𝘂𝗰𝘁𝗲𝗱 - 𝗼𝗻 𝗮 𝘁𝗼𝗽𝗶𝗰 𝘁𝗵𝗮𝘁 𝗶𝘀 𝗰𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝘁𝗼 𝗼𝘂𝗿 𝗽𝗿𝗼𝗳𝗲𝘀𝘀𝗶𝗼𝗻: 𝗣𝗿𝗼𝗷𝗲𝗰𝘁 𝗦𝘂𝗰𝗰𝗲𝘀𝘀.

    📚 Read the report: https://lnkd.in/ekRmSj_h

    With this report, we are introducing a simple and scalable way to measure project success. A successful project is one that 𝗱𝗲𝗹𝗶𝘃𝗲𝗿𝘀 𝘃𝗮𝗹𝘂𝗲 𝘄𝗼𝗿𝘁𝗵 𝘁𝗵𝗲 𝗲𝗳𝗳𝗼𝗿𝘁 𝗮𝗻𝗱 𝗲𝘅𝗽𝗲𝗻𝘀𝗲, as perceived by key stakeholders. This clearly represents a shift for our profession: beyond execution excellence, we also feel accountable for doing everything in our power to improve the impact of our work and the value it generates at large.

    The implications for project professionals can be summarized in a framework for delivering 𝗠𝗢𝗥𝗘 success:

    📚 𝗠anage Perceptions
    For a project to be considered successful, the key stakeholders - customers, executives, or others - must perceive that the project's outcomes provide sufficient value relative to the perceived investment of resources.

    📚 𝗢wn Project Success beyond Project Management Success
    Project professionals need to take every opportunity to move beyond literal mandates and feel accountable for improving outcomes while minimizing waste.

    📚 𝗥elentlessly Reassess Project Parameters
    Project professionals need to recognize the reality of inevitable and ongoing change and continuously, in collaboration with stakeholders, reassess the perception of value and adjust plans.

    📚 𝗘xpand Perspective
    All projects have impacts beyond the scope of the project itself. Even if we do not control all parameters, we must consider the broader picture and how the project fits within the larger business goals and objectives of the enterprise, and ultimately, our world.

    I believe executives will be excited about this work. It highlights the value project professionals can bring to their organizations and clarifies the vital role they play in driving transformation, delivering business results, and positively impacting the world. The shift in mindset will encourage project professionals to consider the perceptions of all stakeholders - not just the C-suite, but also customers and communities.

    To deliver more successful projects, business leaders must create environments that empower project professionals. They need to involve them in defining - and continuously reassessing and challenging - project value. Leverage their expertise. Invest in their work. And hold them accountable for contributing to maximizing the perception of project value at all phases of the project - beyond excellence in execution.

    📚 Please read the report, reflect on its findings, and share it broadly. And comment!

    Project Management Institute #ProjectSuccess #PMI #Leadership #ProjectManagementToday

  • Brij kishore Pandey

    AI Architect | AI Engineer | Generative AI | Agentic AI

    693,343 followers

    When working with multiple LLM providers, managing prompts, and handling complex data flows — structure isn't a luxury, it's a necessity.

    A well-organized architecture enables:
    → Collaboration between ML engineers and developers
    → Rapid experimentation with reproducibility
    → Consistent error handling, rate limiting, and logging
    → Clear separation of configuration (YAML) and logic (code)

    𝗞𝗲𝘆 𝗖𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀 𝗧𝗵𝗮𝘁 𝗗𝗿𝗶𝘃𝗲 𝗦𝘂𝗰𝗰𝗲𝘀𝘀
    It's not just about folder layout — it's how components interact and scale together:
    → Centralized configuration using YAML files
    → A dedicated prompt engineering module with templates and few-shot examples
    → Properly sandboxed model clients with standardized interfaces
    → Utilities for caching, observability, and structured logging
    → Modular handlers for managing API calls and workflows

    This setup can save teams countless hours in debugging, onboarding, and scaling real-world GenAI systems — whether you're building RAG pipelines, fine-tuning models, or developing agent-based architectures.

    → What's your go-to project structure when working with LLMs or Generative AI systems? Let's share ideas and learn from each other.
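    One of the components listed above, a standardized model-client interface, could be sketched in Python roughly as follows. The class and method names are illustrative assumptions, not from the post:

    ```python
    from abc import ABC, abstractmethod

    class ModelClient(ABC):
        """Standardized interface every provider client implements, so
        handlers can swap providers (OpenAI, Anthropic, a local model)
        without touching workflow code."""

        @abstractmethod
        def complete(self, prompt: str, **params) -> str:
            """Return the model's completion for `prompt`."""
            ...

    class EchoClient(ModelClient):
        """Stub client useful in tests and local development; a real
        client would wrap a provider SDK plus retry/rate-limit logic."""

        def complete(self, prompt: str, **params) -> str:
            return f"echo: {prompt}"
    ```

    With every provider behind the same `complete` signature, caching, logging, and rate-limiting utilities can wrap clients uniformly, which is what makes the separation of configuration and logic pay off.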

  • Severin Hacker

    Duolingo CTO & cofounder

    43,645 followers

    Should you try Google’s famous “20% time” experiment to encourage innovation? We tried this at Duolingo years ago. It didn’t work. It wasn’t enough time for people to start meaningful projects, and very few people took advantage of it because the framework was pretty vague. I knew there had to be other ways to drive innovation at the company. So, here are 3 other initiatives we’ve tried, what we’ve learned from each, and what we're going to try next.

    💡 Innovation Awards: Annual recognition for those who move the needle with boundary-pushing projects.
    The upside: These awards make our commitment to innovation clear, and offer a well-deserved incentive to those who have done remarkable work.
    The downside: It’s given to individuals, but we want to incentivize teamwork. What’s more, it’s not necessarily a framework for coming up with the next big thing.

    💻 Hackathon: This is a good framework, and lots of companies do it. Everyone (not just engineers) can take two days to collaborate on and present anything that excites them, as long as it advances our mission or addresses a key business need.
    The upside: Some of our biggest features grew out of hackathon projects, from the Duolingo English Test (born at our first hackathon in 2013) to our avatar builder.
    The downside: Other than the time/resource constraint, projects rarely align with our current priorities. The ones that take off hit the elusive combo of right time + a problem that no other team could tackle.

    💥 Special Projects: Knowing that ideal equation, we started a new program for fostering innovation, playfully dubbed DARPA (Duolingo Advanced Research Project Agency). The idea: anyone can pitch an idea at any time. If they get consensus on it and if it’s not in the purview of another team, a cross-functional group is formed to bring the project to fruition. The most creative work tends to happen when a problem is not in the clear purview of a particular team; this program creates a path for bringing these kinds of interdisciplinary ideas to life. Our Duo and Lily mascot suits (featured often on our social accounts) came from this, as did our Duo plushie and the merch store. (And if this photo doesn't show why we needed to innovate for new suits, I don't know what will!)
    The biggest challenge: figuring out how to transition ownership of a successful project after the strike team’s work is done.

    👀 What’s next? We’re working on a program that proactively identifies big-picture, unassigned problems that we haven’t figured out yet and then incentivizes people to create proposals for solving them. How that will work is still to be determined, but we know there is a lot of fertile ground for it to take root.

    How does your company create an environment of creativity that encourages true innovation? I'm interested to hear what's worked for you, so please feel free to share in the comments!

    #duolingo #innovation #hackathon #creativity #bigideas

  • Aakash Gupta

    AI + Product Management 🚀 | Helping you land your next job + succeed in your career

    292,078 followers

    It’s easy as a PM to only focus on the upside. But you'll notice: more experienced PMs actually spend more time on the downside. The reason is simple: the more time you’ve spent in Product Management, the more times you’ve been burned. The team releases “the” feature that was supposed to change everything for the product - and everything remains the same. When you reach this stage, product management becomes less about figuring out what new feature could deliver great value, and more about de-risking the choices you have made to deliver the needed impact.

    To do this systematically, I recommend considering Marty Cagan's classic 4 Risks.

    𝟭. 𝗩𝗮𝗹𝘂𝗲 𝗥𝗶𝘀𝗸: 𝗧𝗵𝗲 𝗦𝗼𝘂𝗹 𝗼𝗳 𝘁𝗵𝗲 𝗣𝗿𝗼𝗱𝘂𝗰𝘁
    Remember Juicero? They built a $400 Wi-Fi-enabled juicer, only to discover that their value proposition wasn’t compelling. Customers could just as easily squeeze the juice packs with their hands. A hard lesson in value risk. Value Risk asks whether customers care enough to open their wallets or devote their time. It’s the soul of your product. If you can’t match the value of their money or time, you’re toast.

    𝟮. 𝗨𝘀𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗥𝗶𝘀𝗸: 𝗧𝗵𝗲 𝗨𝘀𝗲𝗿’𝘀 𝗟𝗲𝗻𝘀
    Usability Risk isn't about whether customers find value; it's about whether they can even get to that value. Can they navigate your product without wanting to throw their device out the window? Google Glass failed not because of value but usability. People didn’t want to wear something perceived as geeky, or that invaded privacy. Google Glass was a usability nightmare that never got its day in the sun.

    𝟯. 𝗙𝗲𝗮𝘀𝗶𝗯𝗶𝗹𝗶𝘁𝘆 𝗥𝗶𝘀𝗸: 𝗧𝗵𝗲 𝗔𝗿𝘁 𝗼𝗳 𝘁𝗵𝗲 𝗣𝗼𝘀𝘀𝗶𝗯𝗹𝗲
    Feasibility Risk takes a different angle. It's not about the market or the user; it's about you. Can you and your team actually build what you’ve dreamed up? Theranos promised the moon but couldn't deliver. It claimed its technology could run extensive tests with a single drop of blood. The reality? It was scientifically impossible with their tech. They ignored feasibility risk and paid the price.

    𝟰. 𝗩𝗶𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗥𝗶𝘀𝗸: 𝗧𝗵𝗲 𝗠𝘂𝗹𝘁𝗶-𝗗𝗶𝗺𝗲𝗻𝘀𝗶𝗼𝗻𝗮𝗹 𝗖𝗵𝗲𝘀𝘀 𝗚𝗮𝗺𝗲
    (Business) Viability Risk is the "grandmaster" of risks. It asks: Does this product make sense within the broader context of your business? Take Kodak, for example. They actually invented the digital camera but failed to adapt their business model to this disruptive technology. They held back for fear it would cannibalize their film business.

    This systematic approach is the best way I have found to help de-risk big launches. How do you like to de-risk?

  • Vitaly Friedman
    217,662 followers

    🧭 How To Manage Challenging Stakeholders and Influence Without Authority (free eBook, 95 pages) (https://lnkd.in/e6RY6dQB), a practical guide on how to deal with difficult stakeholders, manage difficult situations, and stay true to your product strategy. From HiPPOs (Highest Paid Person’s Opinion) to ZEbRAs (Zero Evidence But Really Arrogant). By Dean Peters.

    Key takeaways:
    ✅ Study your stakeholders as you study your users.
    ✅ Attach your decisions to a goal, metric, or a problem.
    ✅ Have research data ready to challenge assumptions.
    ✅ Explain your tradeoffs, decisions, customer insights, data.
    🚫 Don’t hide your designs: show unfinished work early.
    ✅ Explain the stage of your work and the feedback you need.
    ✅ For one-off requests, paint and explain the full picture.
    ✅ Create a space for small experiments to limit damage.
    ✅ Build trust in your process with regular key updates.
    🚫 Don’t invite feedback on design, but on your progress.

    As designers, we often sit on our work, waiting for the perfect moment to show the grand final outcome. Yet one of the most helpful strategies I’ve found is to give full, uncensored transparency about the work we are doing: the decision making, the frameworks we use to make these decisions, how we test, how we gather insights and make sense of them.

    Every couple of weeks I would either write down or record a short 3–4 min video for stakeholders. I explain the progress we’ve made over the weeks, how we’ve made decisions, and what our next steps will be. I show the design work done and abandoned, informed by research, refined by designers, reviewed by engineers, fine-tuned by marketing, approved by other colleagues. I explain the current stage of the design and what kind of feedback we would love to receive. I don’t really invite early feedback on the visual appearance or flows, but I actively invite agreement from stakeholders on the general direction of the project. I ask if there is anything that is quite important for them, but that we might have overlooked in the process.

    It’s much more difficult to argue against real data and a real established process that has led to positive outcomes over the years. In fact, stakeholders rarely know how we work. They rarely know the implications and costs of last-minute changes. They rarely see the intricate dependencies of “minor adjustments” late in the process.

    Explain how your work ties in with their goals. Focus on the problem you are trying to solve and the value it delivers for them — not the solution you are suggesting. Support your stakeholders, and you might be surprised how quickly you get the support that you need.

    Useful resources:
    The Delicate Art of Interviewing Stakeholders, by Dan Brown 🤎 https://lnkd.in/dW5Wb8CK
    Good Questions For Stakeholders, by Lisa Nguyen, Cori Widen https://lnkd.in/eNtM5bUU
    UX Research to Win Over Stubborn Stakeholders, by Lizzy Burnam 🐞 https://lnkd.in/eW3Yyg5k

    #ux #design

  • Franck Debane

    CEO at Forward Partners

    11,274 followers

    Agile is just Waterfall in disguise. And it’s killing innovation.

    The Agile Manifesto aimed to free development from process and rigidity:
    ⦿ Individuals and interactions over processes and tools.
    ⦿ Working software over comprehensive documentation.
    ⦿ Customer collaboration over contract negotiation.
    ⦿ Responding to change over following a plan.

    But today, Agile has become what it tried to fix. Why?

    🏰 Hierarchy > Autonomy
    Managers resist self-organizing teams to preserve their position.

    📊 Predictability > Experimentation
    Executives request predictable outcomes for shareholders.

    🏆 Certification > Mindset
    Certifications and frameworks don't mean competence.

    The hidden truth: companies apply a coat of Agile paint while preserving traditional hierarchies and dynamics. They perform Agile ceremonies while abandoning the core principles:
    🚫 Stand-ups become interrogations
    🚫 Jira boards replace conversations
    🚫 MVPs require 50-page specs
    🚫 Collaboration means assigning tasks, not solving problems together
    🚫 Product Owners filter user feedback that conflicts with their roadmaps
    🚫 Teams plan entire backlogs upfront and label it "Agile"
    🚫 Focus is not on value delivery but on sprint completion

    The consequences? Agile lets hierarchies hide, consultants cash in, and teams chase sprint completions. Same same, but different name. And innovation dies with a Jira ticket.

    Here is a question for you: Is your company performing Agile or being agile?

  • Dawid Hanak

    I help PhDs & Professors get more visibility for their research without sacrificing research time. Professor in Decarbonization supporting businesses in technical, environmental and economic analysis (TEA & LCA).

    54,409 followers

    Don’t make these common mistakes in techno-economic assessments (and avoid misleading conclusions).

    TEA is a powerful tool to assess the feasibility of emerging technologies. But even small mistakes can lead to misleading conclusions and poor decisions. Here are 5 key mistakes I’ve seen repeatedly—and how to fix them:

    1. Overestimating Technology Performance
    Problem: Assuming ideal or lab-scale performance when scaling up. Real-world conditions often bring inefficiencies.
    Fix: Use conservative assumptions, validate with experimental data, and conduct sensitivity analysis.

    2. Ignoring Uncertainty
    Problem: Treating input values (e.g., costs, energy efficiency) as fixed leads to rigid, unreliable results.
    Fix: Perform sensitivity and scenario analyses to identify critical variables and explore best/worst cases.

    3. Using Outdated or Poor-Quality Data
    Problem: Relying on old data or inconsistent sources reduces the credibility of your TEA.
    Fix: Source data from updated literature, validated models, or credible industry benchmarks, and clearly document assumptions. If data is missing for new technologies, use proxy technologies and check uncertainties.

    4. Oversimplifying Economic Analysis
    Problem: Focusing only on capital costs (CAPEX) while ignoring operating costs (OPEX), maintenance, or financing impacts. Or focusing on single metrics, like NPV.
    Fix: Include all cost components—CAPEX, OPEX, and life-cycle costs—and calculate key metrics like NPV, IRR, and payback period.

    5. Neglecting Policy and Market Factors
    Problem: Ignoring factors like carbon pricing, subsidies, or fluctuating raw material costs can skew results.
    Fix: Integrate policy scenarios, market trends, and potential incentives to build a more realistic TEA.

    Techno-economic analysis is only as good as its assumptions and methods. Avoiding these mistakes will help you deliver insights that are credible, actionable, and valuable for decision-making.
We’re going to discuss all these challenges with TEA and more during my workshop in Q1 2025. What challenges have you faced when conducting TEA? I’d love to hear your thoughts in the comments! #Research #ChemicalEngineering #Economics #Energy #PhD #Scientist #Professor
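    The metrics named under mistake 4 (NPV, payback period) and the one-at-a-time sensitivity analysis recommended under mistakes 1 and 2 can be sketched in plain Python. The 20% swing and the perturbation scheme are illustrative choices, not prescribed in the post:

    ```python
    def npv(rate, cash_flows):
        """Net present value, where cash_flows[0] is the (usually
        negative) year-0 investment."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def payback_period(cash_flows):
        """Years until cumulative (undiscounted) cash flow turns
        non-negative; None if the project never pays back."""
        cumulative = 0.0
        for t, cf in enumerate(cash_flows):
            cumulative += cf
            if cumulative >= 0:
                return t
        return None

    def sensitivity(rate, cash_flows, swing=0.2):
        """One-at-a-time sensitivity: change in NPV when each cash flow
        is perturbed by +/- swing, exposing which assumptions drive the
        result. Returns (year, factor, delta_npv) tuples."""
        base = npv(rate, cash_flows)
        results = []
        for t in range(len(cash_flows)):
            for factor in (1 - swing, 1 + swing):
                perturbed = list(cash_flows)
                perturbed[t] = cash_flows[t] * factor
                results.append((t, factor, npv(rate, perturbed) - base))
        return results
    ```

    Sorting the sensitivity output by the magnitude of the NPV change is a quick way to find the critical variables that deserve conservative assumptions and scenario analysis.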
