Technical Writing Tips

Explore top LinkedIn content from expert professionals.

  • View profile for Atharva Joshi

    ML Kernel Performance Engineer @ AWS Annapurna Labs | Scaling LLM Pre-Training on Hardware Accelerators

    3,180 followers

    Are you a student or early-career professional struggling to get callbacks after submitting your resume? I’ve been there. During my first year of grad school, I blamed the job market when I didn’t get a single interview for nearly seven months. I started applying for Summer 2024 internships in August 2023, but didn’t receive my first callback until March 2024. Over time, I began refining my resume based on what the industry values and what it takes to stand out. That made all the difference. Here are some of the most important lessons I’ve learned:

    1. Keep the Format Simple. Avoid horizontal lines, text-heavy formatting, or excessive bolding. They clutter your resume and make it harder to read. Stick to one page: if you can’t explain your work clearly and concisely, you’re not ready to present it.

    2. Don’t Just List Tools or Describe the Problem; Explain What You Did. Many students focus too much on the business problem (“Built a dashboard for retail analytics”) and gloss over the engineering behind it. Even worse, some just list the tools used: “Used Python, Flask, and AWS to build a service that did X.” Instead, go deeper. What did your Flask service do, exactly? What challenges did you face? What decisions did you make? As engineers, we’re expected to show technical depth. If your resume can’t reflect that, you’ll struggle to stand out, especially for technical roles.

    3. Be Realistic with Metrics. Many resumes include lines like “Improved model accuracy from 12% to 95%.” This kind of stat, usually influenced by generic advice from career centers or the internet, raises red flags. It often signals that the project wasn’t technically complex to begin with. Instead of inflating numbers, focus on what you improved, how you improved it, and why your work mattered. Strong technical framing > flashy percentages.

    4. Clarity > Buzzwords. You might write something like “Leveraged CUDA for token-level optimization of transformer inference under real-time constraints.” It sounds cool, but what does it mean? This happens when people assume the reader will be as familiar with the project as they are. But if someone in your field has to guess what you did, you’ve already lost them. Don’t rely on buzzwords to do the talking; let clarity drive the message.

    5. Your Resume Isn’t for You. Your resume isn’t meant to impress you. It’s intended to communicate what you’ve done to people who don’t share your background. Most first-round reviewers aren’t ML engineers or CUDA developers. They often rely on keyword checklists and rubrics to decide which resumes move forward. The one thing that matters is: can you clearly explain what you did and why it mattered? That’s it.

    Feel free to put your thoughts in the comments. Follow me for more advice!

  • View profile for Ross Dawson

    Futurist | Board advisor | Global keynote speaker | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice | Founder: AHT Group - Informivity - Bondi Innovation

    34,041 followers

    Ignore all the trite prompting guides you see everywhere. This is the real deal. Lee Boonstra of Kaggle/Google lays out LLM configuration, prompting techniques, and best practices for the art of getting the most out of LLMs. These are fundamental skills and capabilities to create value today. Read the document for full details; here is a summary of the best practices (a minimal prompt-template sketch follows this list):

    🧪 Provide examples: Examples help guide the model toward the intended structure, output format, and logic.

    🎯 Design with simplicity: Keep prompts straightforward and clear to reduce ambiguity and increase model reliability.

    🔍 Be specific about the output: Define exactly what kind of output you expect to improve precision and formatting.

    📝 Use Instructions over Constraints: Instead of forbidding outcomes, clearly instruct the model on what to do.

    📏 Control the max token length: Set token limits carefully to manage cost and performance and to avoid excessive output.

    🔁 Use variables in prompts: Incorporate variables to make prompt templates reusable and adaptable across tasks.

    🧬 Experiment with input formats and writing styles: Try different styles and formats to see what elicits the best responses for your use case.

    🔀 For few-shot prompting with classification tasks, mix up the classes: Vary class order in examples to avoid unintended bias from fixed ordering.

    🔧 Adapt to model updates: Continuously refine prompts as models evolve to maintain performance and relevance.

    🧾 Experiment with output formats: Play with JSON, markdown, or bullet lists to get the structure you need.

    🤝 Experiment together with other prompt engineers: Collaborate with peers to discover new strategies and improve prompting skills.

    📚 Document the various prompt attempts: Track and compare different prompt versions to refine your approach and learn from iterations.
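
    To make a few of these points concrete (examples, variables in prompts, explicit output format, shuffled few-shot classes), here is a minimal sketch. It is provider-agnostic: `call_llm` is a hypothetical stand-in for whichever SDK you use, and the ticket classes, labels, and prompt wording are illustrative assumptions, not taken from Boonstra's guide.

    ```python
    # Minimal sketch: a reusable few-shot classification prompt with variables
    # and an explicit JSON output format. `call_llm` is a hypothetical stand-in
    # for whichever SDK/endpoint you actually use.
    import random

    PROMPT_TEMPLATE = """Classify the support ticket into one of: {classes}.
    Return JSON only, e.g. {{"label": "...", "confidence": "high|medium|low"}}.

    {examples}

    Ticket: {ticket}
    JSON:"""

    FEW_SHOT_EXAMPLES = [
        ("App crashes when I upload a photo", "bug"),
        ("How do I export my data to CSV?", "question"),
        ("Please add dark mode", "feature_request"),
    ]

    def build_prompt(ticket: str, classes: list[str]) -> str:
        # Shuffle example order per call to avoid bias from a fixed ordering.
        shuffled = random.sample(FEW_SHOT_EXAMPLES, k=len(FEW_SHOT_EXAMPLES))
        examples = "\n".join(
            f'Ticket: {text}\nJSON: {{"label": "{label}", "confidence": "high"}}'
            for text, label in shuffled
        )
        return PROMPT_TEMPLATE.format(
            classes=", ".join(classes), examples=examples, ticket=ticket
        )

    if __name__ == "__main__":
        prompt = build_prompt(
            "The export button does nothing", ["bug", "question", "feature_request"]
        )
        print(prompt)  # inspect and document each prompt version you try
        # response = call_llm(prompt, max_tokens=64)  # hypothetical client call
    ```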

  • View profile for Mel Loy SCMP

    Author | Speaker | Facilitator | Consultant (all things change and internal comms) | International Award Winner

    5,034 followers

    Keeping it simple is NOT ‘dumbing it down’. Keeping it simple is smart. Why? Because simple language is key to understanding. But if you’re used to using more complex, technical language, it can take a while to break the habit. So here are four top tips to get you started:

    1. Use everyday words - ditch the jargon and the corporate speak, and use familiar words people use every day. If you must use a technical term, make sure you explain it.

    2. Keep sentences short - instead of one long sentence with three ideas, use three shorter sentences with one idea each.

    3. Keep sentence structures simple - too many parentheses and dashes can be confusing and distracting. Don’t over-complicate your sentence structure.

    4. Ask others - share your draft with others who don’t have the level of knowledge, context, or expertise you do, and ask them what doesn’t make sense.

    What else would you add to the list?

    [Image description: Pink tile with dark blue and white text that lists the four tips mentioned in this post, next to corresponding emojis.]

  • View profile for Rishab Kumar

    Staff DevRel at Twilio | GitHub Star | GDE | AWS Community Builder

    22,100 followers

    I recently went through the Prompt Engineering guide by Lee Boonstra from Google, and it offers valuable, practical insights. It confirms that getting the best results from LLMs is an iterative engineering process, not just casual conversation. Here are some key takeaways I found particularly impactful:

    1. 𝐈𝐭'𝐬 𝐌𝐨𝐫𝐞 𝐓𝐡𝐚𝐧 𝐉𝐮𝐬𝐭 𝐖𝐨𝐫𝐝𝐬: Effective prompting goes beyond the text input. Configuring model parameters like Temperature (for creativity vs. determinism), Top-K/Top-P (for sampling control), and Output Length is crucial for tailoring the response to your specific needs.

    2. 𝐆𝐮𝐢𝐝𝐚𝐧𝐜𝐞 𝐓𝐡𝐫𝐨𝐮𝐠𝐡 𝐄𝐱𝐚𝐦𝐩𝐥𝐞𝐬: Zero-shot, One-shot, and Few-shot prompting aren't just academic terms. Providing clear examples within your prompt is one of the most powerful ways to guide the LLM on desired output format, style, and structure, especially for tasks like classification or structured data generation (e.g., JSON).

    3. 𝐔𝐧𝐥𝐨𝐜𝐤𝐢𝐧𝐠 𝐑𝐞𝐚𝐬𝐨𝐧𝐢𝐧𝐠: Techniques like Chain of Thought (CoT) prompting – asking the model to 'think step-by-step' – significantly improve performance on complex tasks requiring reasoning (logic, math). Similarly, Step-back prompting (considering general principles first) enhances robustness.

    4. 𝐂𝐨𝐧𝐭𝐞𝐱𝐭 𝐚𝐧𝐝 𝐑𝐨𝐥𝐞𝐬 𝐌𝐚𝐭𝐭𝐞𝐫: Explicitly defining the System's overall purpose, providing relevant Context, or assigning a specific Role (e.g., "Act as a senior software architect reviewing this code") dramatically shapes the relevance and tone of the output.

    5. 𝐏𝐨𝐰𝐞𝐫𝐟𝐮𝐥 𝐟𝐨𝐫 𝐂𝐨𝐝𝐞: The guide highlights practical applications for developers, including generating code snippets, explaining complex codebases, translating between languages, and even debugging/reviewing code – potential productivity boosters.

    6. 𝐁𝐞𝐬𝐭 𝐏𝐫𝐚𝐜𝐭𝐢𝐜𝐞𝐬 𝐚𝐫𝐞 𝐊𝐞𝐲:
    - Specificity: Clearly define the desired output. Ambiguity leads to generic results.
    - Instructions > Constraints: Focus on telling the model what to do rather than just what not to do.
    - Iteration & Documentation: This is critical. Documenting prompt versions, configurations, and outcomes (using a structured template, like the one suggested) is essential for learning, debugging, and reproducing results.

    Understanding these techniques allows us to move beyond basic interactions and truly leverage the power of LLMs. What are your go-to prompt engineering techniques or best practices? Let's discuss! #PromptEngineering #AI #LLM
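
    As a hedged illustration of takeaways 1, 3, and 4, here is a minimal sketch using the OpenAI Python SDK; any provider's chat SDK works similarly. The model name, role text, parameter values, and example question are arbitrary assumptions, and Top-K is exposed by some providers but not by this particular API.

    ```python
    # Minimal sketch: sampling configuration + role assignment + chain-of-thought.
    # Assumes the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY set;
    # model name and parameter values are illustrative, not recommendations.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",    # swap for whatever model you actually use
        temperature=0.1,        # low = more deterministic (useful for logic/math)
        top_p=0.95,             # nucleus sampling; Top-K is provider-specific
        max_tokens=400,         # cap output length to manage cost and verbosity
        messages=[
            {
                "role": "system",   # role/context shapes tone and relevance
                "content": "You are a careful quantitative analyst. "
                           "Think step by step, then give the final answer on its own line.",
            },
            {
                "role": "user",
                "content": "A warehouse ships 240 orders/day and each picker handles "
                           "30 orders/day. Staffing is cut by 25%. How many orders/day "
                           "can still be shipped?",
            },
        ],
    )

    print(response.choices[0].message.content)
    ```

    Documenting each run's parameters alongside the prompt text, as the guide suggests, makes it much easier to see which configuration change actually moved the output.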

  • View profile for Anna J McDougall

    Engineering Leader of the Year 2024 🏆 CTO Craft 100 | Engineering Director @ Blinkist | TEDx Speaker | Author of “You Belong in Tech”

    10,498 followers

    I didn't turn up to my presentation for Deutsche Bank and AnitaB.org. I prepared the slides. I put a lot of thought into why each and every tip was important. Yet at the end of the day, I wasn't there... I was sick 🤢 Still, their loss is your gain, because I've turned everything from that presentation into a blog post for your convenience! 🥳 In it, I cover the core of presenting technical concepts and/or digitalisation proposals to non-technical stakeholders. SPOILER: it's more about listening and watching than it is about convincing! Here's the summary for those not wanting to read the whole thing:

    🧐 Speak their language: Ask about existing knowledge and establish what 'level' the stakeholder wants to speak at. No need to jump into architecture if they only want to know about personnel requirements.

    😳 Cater to the lowest level in the room: Try to modify your explanations so that everyone gets it. Even those with more technical experience can learn from hearing a non-technical explanation.

    🤩 Focus on collaboration and co-creation: Don't view it as a pitch, but rather as a chance to design a solution together. Be open to "teach, don't preach" if they do look for more details.

    👏 Be direct about resistance: Communicate options, and interpret resistance as an opportunity to put their minds at ease or to design a different solution together.

    🤫 Practice active listening: 'Listening' sometimes happens with the eyes, not the ears. Look for moments when people tune out, change topics, or fidget more. You're losing your audience!

    🚙 Use metaphors: Bridges, factories, post offices, architecture, and housing construction have all been metaphors I have used for explaining software engineering concepts to non-technical stakeholders.

    🧙🏻‍♀️ Incorporate storytelling: Where possible, use real-world stories to illustrate processes, for example how software engineering teams work using agile approaches or version control.

    😎 Be their resource: View these talks as the start of your relationship beyond this specific project. Position yourself to be their 'go-to tech person' when they need something clarified.

    ---

    What do you think? #engineeringmanagement #technicalcommunication #strategiccommunication #pitching https://lnkd.in/eNQ5stUW

  • View profile for Clint Mehall

    Patent nerd; lawyer; Author of PHOSITB.com; Co-chair, NYIPLA Patent Law & Practice Committee

    4,272 followers

    Patent examiners at the USPTO will often reject patent applications using the wrong standard for inherent disclosure of a claimed feature. For example, our client recently received an inherency-centric rejection that asserted one of skill in the art "would expect" the prior art to have certain claimed properties. This is the incorrect standard for establishing inherency. Here are some of my favorite case law quotes for pushing back against a weak rejection based on inherent disclosure.

    Par Pharm. v. TWI Pharms., 773 F.3d 1186, 1195-96, 112 USPQ2d 1945, 1952 (Fed. Cir. 2014) ("A party must, therefore, meet a high standard in order to rely on inherency to establish the existence of a claim limitation in the prior art in an obviousness analysis - the limitation at issue necessarily must be present, or the natural result of the combination of elements explicitly disclosed by the prior art.")

    Agilent Techs., Inc. v. Affymetrix, Inc., 567 F.3d 1366, 1383 (Fed. Cir. 2009) (“The very essence of inherency is that one of ordinary skill in the art would recognize that a reference unavoidably teaches the property in question.” (emphasis added)).

    As stated in MPEP 2112, “[i]n relying upon the theory of inherency, the examiner must provide a basis in fact and/or technical reasoning to reasonably support the determination that the allegedly inherent characteristic necessarily flows from the teachings of the applied prior art.” (quoting Ex parte Levy, 17 USPQ2d 1461, 1464 (Bd. Pat. App. & Inter. 1990)) (emphasis in original).

    “The inherent result must inevitably result from the disclosed steps; ‘[i]nherency . . . may not be established by probabilities or possibilities.’” In re Montgomery, 677 F.3d 1375, 1380 (Fed. Cir. 2012) (quoting In re Oelrich, 666 F.2d 578, 581 (CCPA 1981)).

    MPEP 2112: In re Robertson, 169 F.3d 743, 745 (Fed. Cir. 1999): “To establish inherency, the extrinsic evidence ‘must make clear that the missing descriptive matter is necessarily present in the thing described in the reference, and that it would be so recognized by persons of ordinary skill. Inherency, however, may not be established by probabilities or possibilities. The mere fact that a certain thing may result from a given set of circumstances is not sufficient.’”

    In re Swinehart, 439 F.2d 210, 213 (CCPA 1971) (“[T]he examiner must provide sufficient evidence or scientific reasoning to establish the reasonableness of the Examiner’s belief that the functional limitation is an inherent characteristic of the prior art.”)

    In re Rijckaert, 9 F.3d 1531, 1534, 28 USPQ2d 1955, 1957 (Fed. Cir. 1993) (reversed rejection because inherency was based on what would result due to optimization of conditions, not what was necessarily present in the prior art).

    Anyone have any others? #patents #patentlaw #uspto

  • View profile for Devendra Kumar Sahu

    Senior Applied Scientist | xAmazon xMicrosoft

    9,200 followers

    Once in a while, I help people by giving feedback on their resumes. However, spending more than an hour on a Zoom meeting is not very scalable for me, so I am sharing the main points here.

    𝐑𝐞𝐬𝐮𝐦𝐞 𝐁𝐮𝐢𝐥𝐝𝐢𝐧𝐠 𝐆𝐮𝐢𝐝𝐞𝐥𝐢𝐧𝐞𝐬 𝐈 𝐮𝐬𝐮𝐚𝐥𝐥𝐲 𝐭𝐫𝐲 𝐭𝐨 𝐜𝐨𝐧𝐯𝐞𝐲:

    - 𝐀𝐝𝐨𝐩𝐭 𝐚𝐧 𝐎𝐛𝐣𝐞𝐜𝐭𝐢𝐯𝐞 𝐏𝐞𝐫𝐬𝐩𝐞𝐜𝐭𝐢𝐯𝐞: When reviewing your resume, try to eliminate personal biases. Aim to view it through the eyes of a third party, ideally a hiring manager. Your goal is to help them quickly assess your potential and reduce any perceived risks in hiring you.

    - 𝐅𝐨𝐜𝐮𝐬 𝐨𝐧 𝐒𝐮𝐛𝐬𝐭𝐚𝐧𝐜𝐞: Your resume should reflect real achievements and contributions. While clear and concise language is important, the content itself—what you’ve done—matters most. Strong but simple English is usually sufficient.

    - 𝐄𝐧𝐬𝐮𝐫𝐞 𝐓𝐞𝐜𝐡𝐧𝐢𝐜𝐚𝐥 𝐀𝐜𝐜𝐮𝐫𝐚𝐜𝐲: Double-check your resume for any technical inaccuracies. These can undermine your credibility.

    - 𝐄𝐥𝐢𝐦𝐢𝐧𝐚𝐭𝐞 𝐓𝐲𝐩𝐨𝐬: Typos can leave a negative impression, so make sure your resume is free from them.

    - 𝐀𝐯𝐨𝐢𝐝 𝐕𝐚𝐠𝐮𝐞 𝐋𝐚𝐧𝐠𝐮𝐚𝐠𝐞: Do not use weasel words. Steer clear of ambiguous or filler words that don’t add value. Each word should be purposeful and impactful.

    - 𝐁𝐞 𝐃𝐞𝐥𝐢𝐛𝐞𝐫𝐚𝐭𝐞 𝐰𝐢𝐭𝐡 𝐄𝐯𝐞𝐫𝐲 𝐖𝐨𝐫𝐝: Evaluate and question the necessity of every word on your resume. If it doesn’t contribute to your narrative, consider removing it.

    - 𝐂𝐫𝐚𝐟𝐭 𝐂𝐥𝐞𝐚𝐫 𝐁𝐮𝐥𝐥𝐞𝐭 𝐏𝐨𝐢𝐧𝐭𝐬: Each bullet point should convey a clear message with a specific purpose. Avoid generic statements and ensure that each point highlights your unique contributions.

    - 𝐃𝐞𝐭𝐚𝐢𝐥 𝐘𝐨𝐮𝐫 𝐑𝐨𝐥𝐞 𝐚𝐧𝐝 𝐈𝐦𝐩𝐚𝐜𝐭: Clearly define your role in projects, specifying your responsibilities and the impact you had. If the project was long-term, you might also mention the techniques or methodologies you explored.

    - 𝐋𝐞𝐧𝐠𝐭𝐡 𝐅𝐥𝐞𝐱𝐢𝐛𝐢𝐥𝐢𝐭𝐲: While the one-page rule is common, don’t feel restricted by it. If you need more space to effectively showcase your skills and experiences, go beyond one page. However, make sure that every word on your resume justifies its place.

    - 𝐈𝐦𝐩𝐚𝐜𝐭 𝐰𝐢𝐭𝐡 𝐦𝐞𝐭𝐫𝐢𝐜𝐬: Add impact using relevant ML or business metrics vs. baselines when possible.

    Interested in a Machine Learning career? Visit https://lnkd.in/gYDkxvEQ
    Follow our page: https://lnkd.in/gCfsrRTW
    Follow me on YouTube: https://lnkd.in/geqHDsGJ

    #MachineLearning #AppliedScience #TheCuriousCurator

  • View profile for Dhirendra Sinha

    SW Eng Manager at Google | Startup Advisor & Investor | Author | IIT

    48,524 followers

    3-step framework to write review-ready design docs every time as a software engineer. (Based on my learnings over a decade in Engineering Management and reviewing 100s of design docs, this works like a charm every time.)

    1/ Start with a skeleton; write these sections (a minimal scaffolding sketch follows this post):
    ◄ Metadata (Title, authors, status, date, reviewers, approvers)
    ◄ Context and background
    ◄ Problem statement
    ◄ Summary or tl;dr (Optional)
    ◄ Proposed solution details with tradeoffs and selection rationale
    ◄ Other alternatives considered
    ◄ Failure modes of the proposed solution
    ◄ Open questions
    ◄ References (Optional)

    2/ After the skeleton, fill in the content under these headings.
    - If there are sub-sections, add sub-headings.
    - Provide examples and sample calculations.
    - Use bullet points and lists wherever applicable.
    - Include architectural diagrams, graphs, and tables.

    3/ If the document is large, put a summary after the problem statement.

    Start with the skeleton, take it one step at a time, and before you know it, you are done!

    Remember, a good design doc:
    - helps understand design decisions and implementation details
    - helps in identifying potential issues and challenges early
    - gives a clear understanding of the architecture
    - serves as a reference doc during the project

    While you write and review, make sure your work follows these guidelines. I know writing detailed docs doesn’t come naturally when you’re focused on problem solving. But it’s an essential skill you have to learn to level up. Just follow a simple procedure, practice, and you’ll get the hang of it.

    Also, while writing, just remember:
    - Think like your reader while writing
    - Take a break and read it with a fresh mind
    - Keep it as clear, simple, and brief as possible
    - Iterate over your design doc a few times and polish it
    - Get early feedback from 2-3 people and incorporate it in your design

    P.S.: System design interviews are one of the biggest deciding factors for mid-to-senior engineers. They test how well you can make trade-offs, balance constraints, and architect scalable solutions. I am taking a System Design Webinar next Monday, where I'll be covering:
    ✅ How to approach system design interviews strategically
    ✅ Common mistakes candidates make and how to avoid them
    ✅ The trade-offs that actually matter in real-world systems
    ✅ How to structure your answers to stand out
    Here's the link to register: https://lnkd.in/gM3TvVFn
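
    As a small convenience, here is a minimal sketch that scaffolds the step-1 skeleton into a markdown file. The section list mirrors the post; the output path, metadata fields, and example title are illustrative assumptions, not part of the original framework.

    ```python
    # Minimal sketch: scaffold the design-doc skeleton from step 1 into a
    # markdown file. Section names mirror the post; the output path and
    # metadata field values are illustrative assumptions.
    from datetime import date
    from pathlib import Path

    SECTIONS = [
        "Metadata",
        "Context and background",
        "Problem statement",
        "Summary / tl;dr (optional)",
        "Proposed solution (tradeoffs and selection rationale)",
        "Other alternatives considered",
        "Failure modes of the proposed solution",
        "Open questions",
        "References (optional)",
    ]

    def scaffold(title: str, authors: str, out_dir: str = ".") -> Path:
        """Create <title>.md with an empty heading for every skeleton section."""
        lines = [f"# {title}", ""]
        for section in SECTIONS:
            lines += [f"## {section}", ""]
            if section == "Metadata":
                lines += [
                    f"- Authors: {authors}",
                    "- Status: Draft",
                    f"- Date: {date.today().isoformat()}",
                    "- Reviewers: TBD",
                    "- Approvers: TBD",
                    "",
                ]
        path = Path(out_dir) / f"{title.lower().replace(' ', '-')}.md"
        path.write_text("\n".join(lines), encoding="utf-8")
        return path

    if __name__ == "__main__":
        print(scaffold("Order Service Redesign", "Jane Doe"))  # hypothetical example
    ```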

  • View profile for Orlando René Ramírez Ozuna

    CEO & Founder Stockholm Precision Tools

    8,442 followers

    We’re Reviewing Patent Claims in Mining Tech — Vague Language Won’t Hold Forever

    Over the past few years, I’ve reviewed several patents in the mining and geology sector, and I’m seeing a trend that’s becoming increasingly concerning. Many of these patent applications don’t stand out for their technical substance, but rather for how they’re written. They use broad terms like “geological analysis system,” “sensor adaptable to various formations,” or “predictive structural modeling,” without clearly explaining how these outcomes are achieved. The result? Patents that seem to cover a lot of ground—but technically say very little.

    Legally, they may meet formal requirements. But in practice:
    • They create roadblocks for genuine innovation.
    • They cause legal uncertainty for companies doing real development.
    • They fuel unnecessary litigation in a sector that should focus on progress, not paperwork.

    In mining and geology—where real value lies in terrain knowledge, precision instrumentation, and rigorous interpretation—these vague or “hollow” patents distort the system. They don’t protect technical knowledge; they protect ambiguity.

    So what can we do?
    • Demand greater technical clarity in patent descriptions.
    • Limit functional claims that lack a defined structure or methodology.
    • Promote a culture of technically sound patents, not just legally correct ones.

    A poorly grounded patent can become a weapon to block those who are truly innovating—whether it’s in gyros, orientation tools, downhole sensors, or advanced geological software. In our industry, intellectual property should protect real technical value, not legal smoke and mirrors.

    Have you seen similar cases in your field? #MiningInnovation #Geology #Patents #IntellectualProperty #TechLaw #GeotechnicalTools #SensorDesign #InnovationInMining #LegalInTech

  • View profile for Amit Rawat

    CTO & Co-founder at Meetri Infotech | Custom Software & Mobile App Development | Product Development | Software Development Services | Startup Advisor

    10,540 followers

    People ask me how I explain technical concepts so easily to non-tech folks. My secret? I speak their language. But the truth is... there is no secret, just empathy and simplicity. To become a communication pro, you have to understand your audience. Here are 5 steps to get you started:

    - Break Down Complex Concepts: Use simple terms and relatable examples.
    - Avoid Technical Jargon: Swap out with analogies or metaphors.
    - Focus on Impact: Highlight how it benefits their goals.
    - Utilize Visual Aids: Diagrams make difficult ideas graspable.
    - Engage in Active Listening: Understand their goals and concerns first.

    Want to communicate effectively? Put these steps into practice daily. Remember, showing up and adapting is what separates good communicators from the rest. Start today.

    How do you make complex concepts easy to understand? Share your thoughts or experiences in the comments! #EffectiveCommunication #EmpathyInAction #SpeakTheirLanguage #ActiveListening #SimplifyComplexity
