I've been meaning to write on this topic for a long time, and it took me three months to put my thoughts together. I honestly don't remember what triggered me towards this specific topic. I do, however, know why I'm writing it.
I started my career as a software developer in 2010 and gradually picked up design over the years, which means I never carried "designers should have a seat at the table" as an identity or a mission. I wasn't a vocationally trained designer fighting for legitimacy. I was someone who juggled both worlds in those early years: I wrote code, shipped features, then started caring about whether those features actually made sense to use.
That position gave me something useful: I could watch this entire movement unfold from the outside. I started seeing the patterns without the emotional attachment. I watched design go from afterthought to buzzword to strategic function, and I watched what happened when it got there.
I'm writing this because I'm tired of watching talented designers mistake proximity for power. Because I've sat in too many rooms where design had a chair but no voice that mattered. Because I've seen entire teams optimise for the appearance of influence while the actual decisions that determined whether a product or feature lived or died happened elsewhere.
You see, someone needs to say it: we got what we asked for, and then we fumbled it.
I believe the next generation of designers deserves better than the playbook we're handing them through social media and cohort-based courses that promise a complete design career. But first, we need to be honest about what went wrong.
For a long time, designers talked about wanting a seat at the table, and I heard this repeatedly across hiring conversations, conferences, internal discussions, and informal chats with younger designers who were trying to understand where their careers were headed. The belief was straightforward: if design could just get into the room where decisions were made, products would improve, users would be respected more consistently, and obviously bad ideas would stop shipping.
I never fully identified with this movement, largely because I never had to fight for legitimacy as a designer. Coming from engineering, where I had watched things break in production and fixed them under pressure, I could observe the “seat at the table” narrative unfold without being emotionally invested in it. What I saw over time was design getting exactly what it asked for, followed by confusion about what to do next.
Design did get invited earlier. Designers started sitting in roadmap meetings, planning sessions, and leadership reviews. Titles grew. Teams expanded. Executives began using phrases like “design-led” and “user-first” in all-hands meetings and investor decks. On the surface, the movement looked successful. But the outcomes didn’t follow in the way people expected. Retention didn’t meaningfully improve. Revenue didn’t stabilise just because design was present. Products didn’t suddenly become simpler or more trustworthy at scale. Craft didn’t deepen in proportion to the attention it was getting. Something wasn’t lining up, and it took me a long time to understand that the core misunderstanding wasn’t about access or respect at all.
The table was never about being heard. It was about being responsible for decisions that had real consequences.
When people talk about the table abstractly, it sounds almost ceremonial. In practice, the table is where allocation decisions get made: which features ship this quarter and which slip, which bets get more time and which get killed, which team absorbs the hit when metrics don’t move, which trade-offs are acceptable given the business reality of that moment. Sitting there isn’t about contributing opinions or feedback in the abstract. It’s about making calls that affect timelines, revenue, and people’s work, knowing that some of those calls will turn out to be wrong and that the consequences won’t be theoretical.
I’ve been in those meetings many times: the room feels tense because the roadmap is already full, the quarter is dicey, the numbers are being watched closely, and someone eventually asks whether a particular feature is ready to ship or needs more time. This is usually described as the moment designers want to be part of, because it’s framed as influence. What I’ve often seen instead is hesitation. Language gets softened. Concerns get framed as suggestions. Feedback is labelled “non-blocking.” The decision drifts until someone else, usually product or business, takes it. It happens because taking a clear position in that moment means attaching your name to an outcome that might not go your way.
If you’re early in your design career, it’s important to understand this clearly. Getting a seat at the table was never about representing design or validating design as a discipline. It was always about whether you were prepared to own the consequences of the decisions made in that room, including the ones that don’t work out the way you hoped.
For a long time, designers confused visibility with power. Being invited into meetings felt like progress because design had historically been treated as a service function, brought in late to make things look good. Being included earlier felt like recognition. But recognition isn’t authority. Authority shows up when you’re expected to commit to a direction, explain why you believe in it, and accept responsibility if the outcome is worse than expected. Design, as a function, never fully accepted that deal.
Instead, influence got built around safer things: frameworks, rituals, critique formats, component libraries, design systems, and processes that looked impressive in decks and gave teams a sense of momentum. All of these are useful tools, and none of them are decision rights. Meanwhile, the choices that actually determined outcomes, like pricing models, risk cut-off criteria, growth trade-offs, and launch timelines, remained firmly with product and marketing. Design could comment, sometimes strongly, but rarely had the authority to block. Over time, leadership learned something important from this pattern: design wasn’t going to attach itself to bets that could visibly fail. Once that became clear, design stopped being asked to take those bets.
A lot of teams tried to solve this gap by copying what famous companies did on the surface. They borrowed critique formats, design review rituals, sprint templates, and the whole vocabulary of “craft” and “taste” because those things were visible and easy to replicate, and they gave the comforting feeling that design was becoming more mature. But at the companies people were copying, these rituals worked because the authority existed underneath them: design leaders had real control over timing, real influence on quality bars, and in many cases the ability to stop a launch if the experience would damage trust. In most other places, we inherited the ritual without the authority. Launch criteria didn’t change. Escalation paths didn’t change. Revenue commitments didn’t change. So the ritual became a performance layer: useful for alignment, sometimes even for quality, but not strong enough to change outcomes when pressure showed up.
This problem compounded when design gained prestige faster than it earned responsibility. Teams grew quickly. Hiring bars dropped to keep up with demand. Social media likes rewarded aesthetics and presentation over outcomes and durability. A polished screenshot could create more career momentum than a quiet change that materially improved retention or reduced support load. The incentives were obvious. If you optimised for polish, taste, and process, you were celebrated. If you optimised for uncomfortable trade-offs that might slow growth or upset timelines, you were often invisible or seen as difficult. Predictably, many designers chose the path of least resistance.
When conversations shifted toward revenue, churn, margins, or launch pressure, design voices often faded. Designers waited for product managers to frame the business problem and then executed downstream. Strategic partners don’t wait to be handed problems; they notice issues early, propose bets, and push for them even when it’s uncomfortable. When design consistently behaves like an execution layer, organisations eventually treat it as one.
This got worse once design education turned into a volume business. A profession that used to require apprenticeship and scar tissue started being packaged like a tool you could learn quickly. Bootcamps and cohort programs promised that if you followed a process for a few weeks and produced a template portfolio, you were now a “product designer,” ready for the same rooms and responsibilities as people who had spent years shipping, failing, and learning. The visible skills spread fast: Figma fluency, workshop language, research checklists, polished presentations. The invisible skills didn’t: knowing when a flow will collapse under real usage, sensing when a metric will get worse even if the UI looks cleaner, having the courage to say “we shouldn’t ship this” when everyone is tired and the deadline is close.
The real gap wasn’t talent. It was consequences. Many designers entered the industry without ever being forced to sit in a room where their decision had a measurable cost, where the numbers came back ugly, and where they had to explain what they missed and what they’d change. Without that cycle, people naturally optimise for what looks good in a review and what wins approval in a meeting, because they haven’t had the experience of living with what happens after launch. Over time, that compounds into teams that are busy and articulate, but fragile under pressure, because the discipline never built the muscle of owning outcomes.
None of this happened because designers were incapable or uninterested. It happened because the system made avoiding risk easy. Companies hired designers faster than designers could realistically learn the job. Education flattened a craft that actually requires repeated exposure to failure, recovery, and long-term consequences. People learned how to present work convincingly long before they learned how to live with what happens after it ships. Social media amplified the wrong signals. Management promoted people before they had developed judgment. Teams expanded without strong quality bars. And because people moved jobs frequently, very few stayed long enough to see the second-order effects of their own decisions. Judgment doesn’t come from shipping version one. It comes from being around when version one fails, version two partially recovers, and the long tail of that decision shows up months later.
Power in an organisation is not free. If you want to delay a launch, you also own the revenue risk. If you want another iteration cycle, you own the competitive risk. If you argue to kill a feature, you own the downstream fallout. Engineers talk openly about what broke. Product managers review what didn’t work on their roadmaps. Design, as a discipline, hasn’t normalised this kind of ownership. When things fail, designers often experience it as something that happened to them, rather than something that happened because of a decision they advocated for. That difference is subtle, but it’s exactly the difference the table pays attention to.
Design can still grow into its seat, but that growth requires a shift that goes beyond language or positioning. It means understanding how the business actually works, not superficially, but well enough to argue in the same terms decisions are made in. It means staying in individual contributor roles long enough to develop judgment instead of rushing into management for status. It means hiring slowly, even when leadership is impatient. Most importantly, it means being willing to say, “I believe this will work, and I’m prepared to be publicly wrong if it doesn’t.”
If you’re starting out in the design field, understand this clearly. Getting a seat at the table was never the finish line. It was simply the entry point to a much harder job. The real work begins when you’re willing to carry the same weight everyone else in that room already carries. You can choose not to. That choice is understandable. Being wrong in public is uncomfortable, and responsibility is heavy. But if you don’t take it, someone else will. And they’ll be the ones deciding where the company goes, while you remain present but peripheral, invited but not relied on.
The table never existed to validate design. It existed to see who was willing to own the consequences.