Following user feedback is a Product Management virtue. But is there an actual way to practice it amid all the noise, bugs, and stakeholder requests?

Most teams claim they are customer-driven. Yet the moment you open Zendesk, App Store reviews, survey results, and Slack threads, you instantly remember why everyone quietly avoids this work. Feedback is everywhere: contradictory, emotional, duplicated, and nearly impossible to turn into decisions. It is chaos disguised as “insights.”

This is why the new Amplitude AI Feedback release caught my attention, and made partnering with them on this update an easy decision. It connects what users say with what they actually do, in one workflow. No extra tools. No extra tabs. You see their words, frustrations, and praise. You see their behavior. And AI transforms it into ranked themes, rising trends, top requests, and complaints.

Noise turns into clarity. Opinions turn into patterns. Patterns turn into action.

And because it is native inside Amplitude, it kills the biggest problem in feedback work: fragmentation. Everything flows into analytics, session replay, and cohorts, creating a full loop from insight to fix. You can trace why an issue matters, how many users care, how it impacts behavior, and which actions to take. Finally, a single source of truth for PMs, UX, CX, and marketing.

I’m also genuinely impressed with the supported feedback sources: App Store, Google Play, Zendesk, Intercom, Freshdesk, Salesforce Service, Gong, Trustpilot, G2, Reddit, Discord, and X. Slack arrives in Q1, with more to come.

If you have ever felt overwhelmed by feedback, this is one of the first attempts I have seen that genuinely solves the operational pain, not just the reporting part.

It launches… today! Take a look: https://lnkd.in/dAJKeTez

What is the most successful update you know of that came from a product’s users? Let me know in the comments.

#productmanagement #productmanager #userfeedback
UX Design For Customer Support Tools
-
Clinicians don’t want more data. They want fewer decisions.

HealthTech keeps confusing complexity with sophistication. We assume that because clinicians are smart, they want more dashboards. More alerts. More choices. In truth, they want something no algorithm can measure: cognitive relief.

Imagine you’re a pilot. Mid-flight, you’re shown 17 new dials. Flashing red. Each says something important. Now make a life-or-death decision. Fast. Would you say thank you?

That’s what most clinical decision support looks like in HealthTech today. And it’s killing trust faster than bad data ever could. Why? Because information isn’t value. Clarity is.

The problem, IMO, isn’t the number of alerts. It’s the hidden cost of each micro-decision. Every time we ask a clinician to interpret another data stream, we’re not helping them, we’re taxing them. It’s not death by data. It’s death by 1,000 cognitive cuts.

We’ve forgotten the difference between data and decision. Between information and insight. Between noise and relevance. And worst of all? We often design for what looks impressive, not what actually works on a ward round.

The best HealthTech doesn’t make clinicians feel smarter. It makes them feel safer. Not “empowered.” Not “augmented.” Just calm. Just clear. That’s the gold standard now, isn’t it? Tools that remove thinking, not add to it.

If you’re building in HealthTech, don’t ask: “What more can we show?” Ask: “What decisions can we take away?” That’s where trust is built. That’s where burnout is reduced. Build for fewer decisions.

What would you add?

P.S. Tools that reduce decisions are finally being valued. VCs are rewarding clarity, not complexity. If your AI product calms the chaos, you're building in the right direction: https://lnkd.in/euA2-8a2
-
𝗪𝗵𝗮𝘁'𝘀 𝘁𝗵𝗲 𝗽𝗿𝗶𝗰𝗲 𝗼𝗳 𝗖𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗢𝘃𝗲𝗿𝗹𝗼𝗮𝗱 𝗶𝗻 𝗣𝗿𝗼𝗰𝘂𝗿𝗲𝗺𝗲𝗻𝘁?

Cognitive overload happens when the mental effort required to use a system or process exceeds the user’s capacity. In Procurement, this happens when tools are overly complex or poorly designed.

𝗧𝗵𝗲 𝗰𝗼𝗻𝘀𝗲𝗾𝘂𝗲𝗻𝗰𝗲𝘀 𝗼𝗳 𝗖𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗢𝘃𝗲𝗿𝗹𝗼𝗮𝗱 𝗮𝗿𝗲 𝘀𝗶𝗴𝗻𝗶𝗳𝗶𝗰𝗮𝗻𝘁, ranging from persistent operational inefficiency and more errors to low adoption of complex solutions and, ultimately, a risk of employee burnout.

While some level of complexity is inevitable to support advanced functionality, the way tools and workflows are designed plays a crucial role in their usability, in how effectively users can engage with them, and in the level of mental load they create.

Cognitive Load Theory (CLT), introduced by John Sweller in the 1980s, provides a framework for reducing mental strain by focusing on how users learn, process, and retain information. CLT identifies three types of cognitive load and offers insights into how Procurement systems can be optimised for usability:

1️⃣ 𝗜𝗻𝘁𝗿𝗶𝗻𝘀𝗶𝗰 𝗟𝗼𝗮𝗱, which arises from the inherent complexity of the task or information. In Procurement, examples include multi-dimensional RFP scoring or the authoring of complex contracts and their SLAs.
𝗛𝗼𝘄 𝘁𝗼 𝗵𝗮𝗻𝗱𝗹𝗲 𝘁𝗵𝗶𝘀? Break down and simplify complex tasks into manageable steps using modular workflows, and provide pre-configured templates for common scenarios (see the sketch below).

2️⃣ 𝗘𝘅𝘁𝗿𝗮𝗻𝗲𝗼𝘂𝘀 𝗟𝗼𝗮𝗱, stemming from poor system design, irrelevant information, or inefficient processes. For example, clunky interfaces, unnecessary workflow steps, or dashboards that hide insights under excessive detail.
𝗛𝗼𝘄 𝘁𝗼 𝘀𝗼𝗹𝘃𝗲 𝘁𝗵𝗶𝘀? Minimise extraneous load with functional user interface design, smart visualisations, and streamlined workflows.

3️⃣ 𝗚𝗲𝗿𝗺𝗮𝗻𝗲 𝗟𝗼𝗮𝗱, resulting from the cognitive effort that directly supports learning and mastery. Examples include tooltips, clear guidance, and onboarding processes that make systems easier to navigate.
𝗛𝗼𝘄 𝘁𝗼 𝘀𝘂𝗽𝗽𝗼𝗿𝘁 𝘁𝗵𝗶𝘀? Enhance germane load with role-specific training, embedded tooltips, and intuitive help features that accelerate user learning.

When the combined load exceeds what employees can handle, their capacity to operate effectively drops, bringing negative consequences and mental stress.

𝗖𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗼𝘃𝗲𝗿𝗹𝗼𝗮𝗱 𝗰𝗼𝗺𝗲𝘀 𝗮𝘁 𝗮 𝗵𝗶𝗴𝗵 𝗽𝗿𝗶𝗰𝗲. 𝗦𝗼𝗹𝘂𝘁𝗶𝗼𝗻𝘀 𝘄𝗵𝗶𝗰𝗵 𝗰𝗼𝗻𝘀𝗶𝗱𝗲𝗿 𝗮 𝗵𝘂𝗺𝗮𝗻-𝗰𝗲𝗻𝘁𝗿𝗶𝗰 𝗱𝗲𝘀𝗶𝗴𝗻 and optimise cognitive load by unveiling tasks step by step, simplifying design, and providing helpful learning features 𝗵𝗮𝘃𝗲 𝗮 𝗵𝗶𝗴𝗵𝗲𝗿 𝗰𝗵𝗮𝗻𝗰𝗲 𝘁𝗼 𝘁𝘂𝗿𝗻 𝗳𝗿𝗼𝗺 𝗮 𝗵𝗲𝗮𝗱𝗮𝗰𝗵𝗲 𝘁𝗼 𝗮 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝘃𝗶𝘁𝘆 𝗯𝗼𝗼𝘀𝘁𝗲𝗿.

❓ How do you think solutions can be humanised to reduce cognitive load?
❓ What else helps to create good usability and a good user experience?
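To make the "modular workflows" advice under 1️⃣ concrete, here is a minimal sketch of step-by-step task disclosure, written in Python purely for illustration. The step names and prompts are hypothetical, not taken from any specific Procurement tool:

```
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    prompt: str

@dataclass
class ModularWorkflow:
    """Expose one sub-task at a time instead of a single monolithic form."""
    steps: list[Step]
    current: int = 0

    def visible_step(self) -> Step:
        # The user only ever sees the active step, not the full complexity.
        return self.steps[self.current]

    def complete_step(self) -> bool:
        # Advance to the next step; return True while steps remain.
        self.current += 1
        return self.current < len(self.steps)

# Hypothetical RFP-scoring flow, broken into manageable pieces.
rfp_scoring = ModularWorkflow(steps=[
    Step("criteria", "Pick the 3-5 criteria that matter for this RFP."),
    Step("weights", "Assign a weight to each criterion (they must sum to 100)."),
    Step("scores", "Score one supplier at a time against each criterion."),
    Step("review", "Review the weighted totals before submitting."),
])

while True:
    step = rfp_scoring.visible_step()
    print(f"{step.name}: {step.prompt}")
    if not rfp_scoring.complete_step():
        break
```

The point of the structure is that intrinsic load stays constant overall, but the amount presented at any one moment never exceeds one decision.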
-
I often say that in an AI world metacognition is the master capability. This applies at all levels, especially in framing work, but also in interacting with AI. Research reveals specific approaches that yield better outcomes in working with GenAI.

Very pleased that Microsoft Research has a significant focus on metacognition, with numerous papers on the topic. One of these, "The Metacognitive Demands and Opportunities of Generative AI", has some particularly instructive findings on both system design and usage:

🧩 Make the task explicit before you prompt. Most prompting interfaces expect you to state clear goals and break work into sub-tasks (e.g., “condense to two paragraphs,” “update the tone”). This metacognitive step is not optional: users who specify goals and decompose tasks gain better control over outputs.

🧠 Treat prompting as a metacognitive exercise. Effective use requires two abilities during iteration: calibrating your confidence (“is it my prompt, parameters, or model randomness?”) and flexibly switching strategies (retry, refine, or decompose further).

🛞 Choose the right interaction mode for control vs. ease. Giving explicit instructions is felt to be harder than inline edits, but it gives more control. Users often struggle at “getting started,” especially when many adjustable parameters are exposed.

🧪 Expect heavier evaluation work when AI generates long content. GenAI outputs (full emails, presentations, or code) shift effort from writing to judging, increasing cognitive load compared to simple auto-complete. People also tend to “eyeball” generated code, risking over-confidence in correctness.

⚡ Watch for fluency-driven overconfidence. Fast, fluent answers can inflate your confidence in both the output and your own evaluation, even when accuracy hasn’t improved. Higher felt confidence then reduces the effort you invest in checking, shortening thinking time and lowering willingness to revise.

🗺️ Use planning aids to improve prompts. Built-in planning support (goal setting + task decomposition) helps users craft better prompts; “prompt chaining” (multi-step sub-tasks) made participants “think through the task better” and target edits more precisely (see the sketch below).

🧭🛠️ Reduce demand with explainability and customizability. Surface the right controls (e.g., temperature, shortlist size, output length) and adapt complexity to user state. This can improve self-awareness, confidence, and satisfaction.

🕹️ Support self-evaluation and self-management in the UI. Proactive, neutral nudges based on prior behavior (e.g., “you typically add 15 follow-ups after vague summaries”) can guide users to specify goals up front and reduce rework.

⚖️ Manage cognitive load while improving metacognition. Interventions (decomposition steps, reflections, explanations) add information to process, but studies show metacognitive support can improve outcomes without raising overall load; adapt or fade prompts as skills grow.
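As a rough illustration of the goal-setting, decomposition, and prompt-chaining findings above, here is a minimal Python sketch. Everything in it is a hypothetical stand-in: `call_llm` represents whatever model client you use, and the prompts are invented, not taken from the paper:

```
from typing import Callable

LLM = Callable[[str], str]  # stand-in: plug in any client (OpenAI, local model, etc.)

def summarize_with_chaining(call_llm: LLM, document: str) -> str:
    """Prompt chaining: explicit goal, decomposition, and a separate evaluation pass."""
    # 1. Make the goal explicit before prompting.
    goal = "Condense this report to two paragraphs for an executive audience."

    # 2. Decompose into sub-tasks instead of one mega-prompt.
    key_points = call_llm(
        f"List the 5 most decision-relevant points in this document:\n{document}")
    draft = call_llm(
        f"Goal: {goal}\nWrite a draft using only these points:\n{key_points}")

    # 3. A separate critique step counters fluency-driven overconfidence:
    #    the model checks the draft against the source before revising.
    critique = call_llm(
        f"Compare this draft to the source document and list omissions or errors.\n"
        f"Draft:\n{draft}\n\nSource:\n{document}")
    return call_llm(
        f"Revise the draft to address these issues:\n{critique}\n\nDraft:\n{draft}")

# Usage with any client, e.g.: summarize_with_chaining(my_client.complete, report_text)
```

Each chained call forces a small act of calibration: you see the intermediate output before committing to the next step, instead of eyeballing one long fluent answer.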
-
I used to think user research was easy. But then I switched to B2B. And oh boy... reality hit hard.

Back when I was working on a B2C product, I could run 10 user interviews in a day. Users would happily spend 45 minutes answering questions and testing new designs. I thought this was just regular product design. Turns out, I was riding a perfect wave of continuous discovery without even realizing it.

Then I switched to B2B. And I admit it really felt scary at first. Users were just too busy to pick up my calls. It took 3 weeks to schedule 5 calls. Some users left a bad CSAT score with barely any comment. Damn. How can we build anything serious without ever talking to users? At the time, it really felt like an impossible task. And any way I tried to put it, there was just no efficient process for getting those users on the phone.

But then it hit me. What if the best discovery touchpoints weren’t designers or PMs at all? What if they were already happening… in sales calls, support chats, internal Slack threads? We had this feedback scattered across tools, threads, and people. But no one was making sense of it.

So we built a Feedback Management System. We funneled every piece of feedback into a single source of truth directly in Notion (a simplified sketch of the idea follows below):
- Intercom conversations and Modjo calls with customers
- Internal tickets from sales and support discussing user pain points or feature requests
- User feedback forms submitted on the platform

All filtered and organized per team through Notion automations. Each designer spends 2 hours per week turning raw feedback into structured insights. Then each team reviews it together weekly, and it feeds product decisions and the roadmap.

It’s simple. It’s scalable. And it changed everything. Product designers no longer design based on shaky assumptions or partial data. They’re now the source of customer truth and alignment.

In B2B, discovery doesn’t happen in a lab. It happens in the wild. You just need to know where to listen.

#productdesign #uxdesign #userresearch
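For readers who want to build something similar, here is a minimal sketch of the normalization step such a system rests on: every source gets reduced to one shared schema before anyone tries to make sense of it. It is illustrative only; the field names and keyword routing are hypothetical, and in the setup described above this role was played by Notion automations rather than custom code:

```
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FeedbackItem:
    """Common schema: every feedback source is reduced to the same shape."""
    source: str       # "intercom", "sales_ticket", "in_app_form", ...
    customer: str
    team: str         # routing key used to filter views per squad
    text: str
    created_at: datetime

def route_to_team(text: str) -> str:
    # Naive keyword routing; in the real setup, Notion automations did this.
    keywords = {"invoice": "billing", "export": "data", "login": "auth"}
    return next((team for kw, team in keywords.items() if kw in text.lower()),
                "general")

def from_intercom(conv: dict) -> FeedbackItem:
    # Map one source's raw payload into the shared schema.
    return FeedbackItem(
        source="intercom",
        customer=conv["contact_email"],
        team=route_to_team(conv["body"]),
        text=conv["body"],
        created_at=datetime.fromisoformat(conv["created_at"]),
    )

item = from_intercom({
    "contact_email": "jane@acme.com",
    "body": "The CSV export keeps timing out.",
    "created_at": "2024-05-01T09:30:00",
})
print(item.team)  # "data"
```

Once everything lands in one shape, the weekly 2-hour synthesis pass becomes filtering and reading, not archaeology across five tools.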
-
📌 Most Dashboards Fail Because of Bad UX

Here’s the hard truth: you can have the cleanest data and the most advanced models… but if your dashboard is confusing, cluttered, or hard to navigate? Nobody will use it.

BI isn’t just about data. It’s about experience. Dashboards are in fact UX products and should be treated that way. Great dashboards don’t just “show data.” They guide attention. Simplify decisions. Reduce friction. And just like any great product, they follow strong UX principles:
→ Clear layout
→ Logical flow
→ Minimal cognitive load
→ Built for the user, not the developer

Let’s break down the 3 dashboard principles that make this possible 👇

1️⃣ 𝐃𝐞𝐬𝐢𝐠𝐧 𝐖𝐢𝐭𝐡 𝐭𝐡𝐞 𝐄𝐧𝐝 𝐔𝐬𝐞𝐫 𝐢𝐧 𝐌𝐢𝐧𝐝
This is where most dashboards go wrong. They’re built from a technical perspective, not a business one. Before touching a single chart, ask:
→ Who is this dashboard for?
→ What do they care about?
→ What action do they need to take from it?
→ What single question should this dashboard answer?
If a dashboard tries to do everything for everyone, it ends up doing nothing for anyone. Treat your dashboard like a product. Build it around one user persona and one decision-making flow.

2️⃣ 𝐆𝐮𝐢𝐝𝐞 𝐭𝐡𝐞 𝐄𝐲𝐞 𝐰𝐢𝐭𝐡 𝐚 𝐂𝐥𝐞𝐚𝐫 𝐋𝐚𝐲𝐨𝐮𝐭
A great dashboard feels effortless to use. You don’t need to explain how to read it, because it guides the user by design. Here’s how to do it:
1) Follow a natural reading pattern (top-left to bottom-right)
2) Use consistent spacing, alignment, and visual hierarchy
3) Group related charts and KPIs together
4) Avoid visual noise (limit each view to 5–7 key visuals)
Think of your dashboard like a story. It should unfold logically and lead the user to an insight without them having to look for it.

3️⃣ 𝐔𝐬𝐞 𝐭𝐡𝐞 𝐑𝐢𝐠𝐡𝐭 𝐕𝐢𝐬𝐮𝐚𝐥 𝐟𝐨𝐫 𝐭𝐡𝐞 𝐉𝐨𝐛
Just because you can use a radar chart or sunburst doesn’t mean you should. The best dashboards use simple, familiar visuals that communicate clearly. Here’s a cheat sheet I use (encoded as a simple lookup in the sketch below):
⤷ To show progress or results → Scorecards or KPIs
⤷ To show trends over time → Line Charts or Area Charts
⤷ To compare parts of a whole → Pie Charts or Bar Charts
⤷ To analyze distributions → Histograms or Bell Curves
⤷ To show multivariate complexity → Heatmaps, Bubble Charts, or Pivot Tables
What you need to remember here: prioritize clarity over creativity. Your dashboard isn’t a Dribbble shot or a piece of art. It’s a decision tool.

The bottom line: dashboards aren’t “data displays.” They’re interfaces for decision-making. And just like a product interface, design is everything.
☑ Good UX = Faster insights
☑ Good flow = Higher adoption
☑ Good visuals = Better decisions

Build with purpose. Structure with clarity. Design for people. That’s how Business Intelligence becomes actual business impact.

#DataStrategy #BusinessIntelligence #DataAnalytics
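One way to make the cheat sheet operational is to encode it as a lookup, so chart choice becomes a rule instead of a taste call during review. A minimal sketch; the intent names are mine, not a standard taxonomy:

```
# The cheat sheet above, as data: analytic intent -> familiar visuals.
CHART_FOR_INTENT = {
    "progress_or_result": ["scorecard", "KPI card"],
    "trend_over_time": ["line chart", "area chart"],
    "part_of_whole": ["pie chart", "bar chart"],
    "distribution": ["histogram", "bell curve"],
    "multivariate": ["heatmap", "bubble chart", "pivot table"],
}

def suggest_chart(intent: str) -> list[str]:
    """Return familiar visuals for an analytic intent; fail loudly otherwise."""
    try:
        return CHART_FOR_INTENT[intent]
    except KeyError:
        raise ValueError(
            f"Unknown intent {intent!r}; pick one of {sorted(CHART_FOR_INTENT)}")

print(suggest_chart("trend_over_time"))  # ['line chart', 'area chart']
```

A shared table like this also makes dashboard reviews faster: the question shifts from "do I like this chart?" to "what intent is this view serving?"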
-
You can’t see cognitive overload. That’s why it’s ignored.

Most teams treat accessibility as contrast ratios and alt text. But cognitive accessibility is wider than that, and less forgiving when you get it wrong.

Here are 5 common cognitive disabilities, and what designers can actually do about them.

1. ADHD
Challenges:
• Distractibility
• Difficulty prioritizing
• Overwhelm from dense layouts
Design for:
• Clear visual hierarchy
• One primary action per section
• Step-based flows
Avoid:
• Competing primary CTAs
• Auto-rotating carousels
• Notification overload

2. Dyslexia
Challenges:
• Slower decoding
• Reading fatigue
• Difficulty with dense text blocks
Design for:
• Plain language
• Left-aligned text
• Generous line height (1.5+ recommended)
• Clear headings and chunking
Avoid:
• Justified text
• Long paragraphs
• Low-contrast body text

3. Autism Spectrum
Challenges:
• Sensory sensitivity
• Cognitive overload
• Distress from unexpected change
Design for:
• Predictable layouts
• Explicit labels
• Warnings before context shifts
• User-controlled animation and motion
Avoid:
• Sudden modals
• Autoplay video
• Ignoring the user’s reduced-motion preference
• Ambiguous copy like “Try it” or “Explore”

4. Memory Impairment
Challenges:
• Forgetting steps
• Losing context in multi-step flows
Design for:
• Persistent instructions
• Progress indicators
• Auto-save
• Clear error recovery
Avoid:
• Clearing form data on error
• Hiding previous answers
• Long forms without sectioning

5. Anxiety Disorders
Challenges:
• Fear of mistakes
• Stress from uncertainty
• Decision paralysis
Design for:
• Reassuring microcopy
• Undo functionality
• Transparent consequences
• Calm error messaging
Avoid:
• Countdown timers
• Aggressive urgency language
• Vague destructive actions

Ask yourself: "Does this screen reduce thinking or increase it?"

👇🏽 Are we over-indexing on visual accessibility while ignoring cognitive overload? Drop your thoughts in the comments.

♻️ Share and save this for your team.
---
✉️ Subscribe to my newsletter for accessibility and design insights here: https://lnkd.in/gZpAzWSu
---
Accessibility note: The content of this post is the same as the attached image (except for a few bullets omitted for easy scannability).
-
System prompts are getting outdated!

Here's a counterintuitive lesson from building real-world agents: writing giant system prompts doesn't improve an agent's performance; it often makes it worse.

For example, you add a rule about refund policies. Then one about tone. Then another about when to escalate. Before long, you have a 2,000-word instruction manual. But here’s what we’ve learned: LLMs are extremely poor at handling this. Recent research also confirms what many of us experience: there’s a “curse of instructions.” The more rules you add to a prompt, the worse the model performs at following any single one.

Here’s a better approach: contextually conditional guidelines. Instead of one giant prompt, break your instructions into modular pieces that only load into the LLM when relevant.

```
# Parlant-style guideline: stored outside the system prompt and loaded
# into the model's context only when its condition matches the conversation.
agent.create_guideline(
    condition="Customer asks about refunds",  # when does it get loaded?
    action="Check order status first to see if eligible",  # what should the agent do?
    tools=[check_order_status],  # tools made available alongside it
)
```

Each guideline has two parts:
- Condition: When does it get loaded?
- Action: What should the agent do?

The magic happens behind the scenes. When a query arrives, the system evaluates which guidelines are relevant to the current conversation state. Only those guidelines get loaded into the model’s context (a toy sketch of this mechanism follows below). This keeps the LLM’s cognitive load minimal: instead of juggling 50 rules, it focuses on just the 3-4 that actually matter at that point. The result is dramatically better instruction-following.

This approach is called Alignment Modeling: structuring guidance contextually so agents stay focused, consistent, and compliant. Instead of waiting for a smarter model, what matters is having an architecture that respects how LLMs fundamentally work.

This approach is implemented in Parlant, a recently trending open-source framework (13k+ stars). You can see the full implementation and try it yourself. But the core insight applies regardless of what tools you use: be methodical about context engineering, and explicitly spell out the behavior you expect in the special cases you care about. Then agents can become truly focused and useful.

I’ve shared the repo link in the first comment!
___
Share this with your network if you found this insightful ♻️
Follow me (Akshay Pachaar) for more insights and tutorials on AI and Machine Learning!
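Framework aside, the core mechanism is easy to sketch. Below is a toy version of condition-matched guideline loading; the keyword matcher and example guidelines are hypothetical stand-ins, and a real system like Parlant uses the model itself to judge relevance rather than string matching:

```
from dataclasses import dataclass
from typing import Callable

@dataclass
class Guideline:
    condition: str                   # natural-language trigger, for humans
    action: str                      # instruction injected when triggered
    matches: Callable[[str], bool]   # toy relevance check (real systems ask an LLM)

GUIDELINES = [
    Guideline("Customer asks about refunds",
              "Check order status first to see if the order is eligible.",
              lambda msg: "refund" in msg.lower()),
    Guideline("Customer is angry",
              "Acknowledge the frustration before proposing next steps.",
              lambda msg: any(w in msg.lower() for w in ("angry", "terrible", "worst"))),
]

def build_context(user_message: str) -> str:
    # Only the guidelines relevant right now enter the prompt, not all 50.
    active = [g.action for g in GUIDELINES if g.matches(user_message)]
    return "Follow these guidelines:\n" + "\n".join(f"- {a}" for a in active)

print(build_context("I want a refund, this is the worst service"))
```

The design choice worth copying is the separation: rules live in a store with explicit trigger conditions, and the prompt is assembled per turn, so adding rule #51 never degrades rules #1 through #50.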
-
Power BI filter panels are often harder to use than they should be. Why does the one on the right work better?

𝐆𝐫𝐨𝐮𝐩𝐢𝐧𝐠
Slicers are broken into sections instead of one long block. Ideally group them in a logical order, or at least space them out evenly. It makes the selection options easier to scan and less overwhelming.

𝐒𝐩𝐚𝐜𝐢𝐧𝐠
It’s hard to see which label belongs to which slicer when everything is cramped. Adjusted vertical gaps make it instantly clearer.

𝐇𝐮𝐦𝐚𝐧 𝐫𝐞𝐚𝐝𝐚𝐛𝐢𝐥𝐢𝐭𝐲
HasCreditCard and IsActiveMember are true/false fields with 0/1 values and the default field names. Viewers have to decode them in their heads. Make them human-readable instead:
• Credit Card Holders: Holding / Not Holding
• Active Members: Active / Inactive
Also, why use a dropdown for just two options? Show them directly.

𝐈𝐦𝐩𝐫𝐨𝐯𝐞 𝐮𝐬𝐚𝐛𝐢𝐥𝐢𝐭𝐲
The panel on the left closes only when you click the filter button again, with zero indication of that. Add buttons to interact directly on the filter panel:
• Two visible options to close the panel (a top-right X and a bottom Close button), so users can pick the shorter path.
• As a third option, you can also add a scrim (an overlay behind the panel) with a close action, so users can close the panel by clicking anywhere on the dashboard. This is an expected behavior from other tools and apps.
• A clear-filter button to reset everything with one click. It makes users’ lives easier.

𝐓𝐲𝐩𝐨𝐠𝐫𝐚𝐩𝐡𝐲
• Replace pure black with softer grays to reduce visual contrast and give an easier feel.
• De-emphasize slicer labels (lighter, smaller) so they don’t compete with the values.
• Keep the selection values darker and larger so they stand out more.
• Make the Clear button red to indicate a “destructive” action: clicking it discards the filter selection.

These are small changes, but together they add up to a much better experience. If you want to build this in Power BI, I shared the tutorial in the comments. 👇

#powerbi #dataviz #reportdesign #dashboarddesign #uidesign
-
Is AI Easing Clinician Workloads—or Adding More?

Healthcare is rapidly embracing AI and Large Language Models (LLMs), hoping to reduce clinician workload. But early adoption reveals a more complicated reality: verifying AI outputs, dealing with errors, and struggling with workflow integration can actually increase clinicians’ cognitive load.

Here are four key considerations:

1. Verification Overload - LLMs might produce coherent summaries, but “coherent” doesn’t always mean correct. Manually double-checking AI-generated notes or recommendations becomes an extra task on an already packed schedule.

2. Trust Erosion - Even a single AI-driven mistake, like the wrong dosage, can compromise patient safety. Errors that go unnoticed fracture clinicians’ trust and force them to re-verify every recommendation, negating AI’s efficiency.

3. Burnout Concerns - AI is often touted as a remedy for burnout. Yet if it’s poorly integrated or frequently incorrect, clinicians end up verifying and correcting even more, adding mental strain instead of relieving it.

4. Workflow Hurdles - LLMs excel in flexible, open-ended tasks, but healthcare requires precision, consistency, and structured data. This mismatch can lead to patchwork solutions and unpredictable performance.

Moving Forward
- Tailored AI: Healthcare-specific designs that reduce “prompt engineering” and improve accuracy.
- Transparent Validation: Clinicians need to understand how AI arrives at its conclusions.
- Human-AI Collaboration: AI should empower, not replace, clinicians by streamlining verification.
- Continuous Oversight: Monitoring, updates, and ongoing training are crucial for safe, effective adoption.

If implemented thoughtfully, LLMs can move from novelty to a genuine clinical asset. But we have to address these limitations head-on to ensure AI truly lightens the load.

Want a deeper dive? Check out the full article, where we explore each of these points in more detail and share how we can build AI solutions that earn clinicians’ trust instead of eroding it.