June 2025

Design tools and cognitive load — why the software you use shapes what you think

Switching from Photoshop to Blender to KeyShot over the course of a project isn't just a workflow question. Each tool has a different model of what a design is.

The hardware concept work on this site moved through three different tools depending on the stage. Photoshop for initial mockups and reference composition. Blender for 3D modeling. KeyShot for rendering and material development. At the time this felt like a rational workflow — use the right tool for each phase. What I didn't fully appreciate until later was how much each switch changed not just the output but the thinking.

Each of those tools has a different underlying model of what a designed object is. Photoshop treats it as a flat composition of layers. Blender treats it as a collection of geometric objects in three-dimensional space. KeyShot treats it as a surface with optical properties under specific lighting conditions. Moving between them isn't just a change of interface. It's a change in what questions you can ask and what answers are possible.

Tools as cognitive scaffolding

The phrase "cognitive load" comes up most often in discussions of interface design — how many things a user has to hold in mind to accomplish a task. But it applies equally well to the tools designers use themselves. Every tool makes some operations easy and others effortful. The easy operations get done. The effortful ones get deferred, or skipped, or approximated.

Photoshop is very fast at compositing and image adjustment. It's slow and awkward at representing three-dimensional geometry. So designers using Photoshop do a lot of compositing and not much 3D reasoning — not because they've decided compositing matters more, but because the tool rewards it. Blender is the opposite. It's built around spatial geometry. Two-dimensional considerations — how does this look as a flat image, how does it read at thumbnail size — require a deliberate mental shift that the tool doesn't scaffold naturally.

KeyShot sits at a different point again. The core question it poses is optical: what happens when light hits this surface? Material choice, lighting setup, camera angle all matter enormously. It's a tool for answering questions about appearance, not about geometry — which is why the workflow runs Blender first and KeyShot second, not the other way around.

What gets lost in the switch

The practical problem with moving between tools is that some of what you know in one context doesn't survive the transition to another. A model that reads well in Blender's viewport — clean geometry, good proportions — sometimes looks flat or dead in KeyShot. A render that looks photorealistic in KeyShot can expose problems in the underlying model that weren't visible in the viewport. You've been optimizing for the wrong thing.

More subtly: each tool change resets your sense of what's possible. When you're working in Blender, you're aware of the modeling complexity of each decision. Adding a chamfer has a cost — more geometry to manage, more potential for shading errors. That awareness shapes what you try. When you move to KeyShot, that cost disappears from view. The model is done; you're evaluating surfaces now. The choices you deferred in Blender because they seemed expensive don't get reconsidered. They just stay deferred.

This is a version of a more general problem: the things a tool makes easy become the things you pay attention to, and the things it makes hard stop being considered even when they matter. The tool shapes the questions you ask, which shapes the solutions you find.

Donald Norman on affordances

Donald Norman's work on affordances describes how objects signal what actions they support. A door handle affords pulling. A flat plate affords pushing. Software has affordances too — menus, buttons, modifier keys — and those affordances steer behavior as reliably as physical ones. In software the affordances are often invisible until you know to look for them, and the learning curve for complex tools means many affordances never get discovered at all.

Blender has an enormous surface area. After using it for years, I still occasionally discover a shortcut or a modifier that would have saved significant time if I'd known it earlier. The tools I actually used regularly were the ones I'd learned in the first few months. The rest of the tool existed, technically, but it wasn't part of my mental model of what was possible — it might as well not have been there.

This has real implications for how tool expertise develops. You don't learn a tool and then use it. You learn a partial version of a tool, and that partial version shapes everything you build with it. The question is whether your partial version covers the parts that matter most for what you're trying to do.

The integration problem

The multi-tool workflow has another problem: nothing integrates. Photoshop doesn't know what's in Blender. Blender doesn't know what materials KeyShot will apply. KeyShot doesn't know what will happen to the image in Photoshop. At each handoff, information gets lost. A decision made in Blender about surface geometry gets re-evaluated in KeyShot without the context of why it was made that way. A lighting choice in KeyShot gets adjusted in Photoshop in ways that may not be consistent with the render's internal logic.

Professionals deal with this through discipline — documentation, file naming conventions, explicit notes about decisions and rationale. For solo work or small-scale projects, that infrastructure usually doesn't exist. The decisions live in your head, and they don't all survive the context switches.

Newer, more integrated tools have a real advantage here. When material, geometry, and lighting all live in the same environment and update together, decisions in one domain can be evaluated against their effect on the others. You lose the specialization of dedicated tools — Blender is a better modeler than most integrated solutions, and KeyShot a better renderer — but you gain coherence.

What this means for how you work

Be deliberate about when you switch tools rather than switching automatically at phase transitions. Some decisions should be made earlier than the workflow suggests — if a material choice will affect the model geometry, that decision shouldn't wait for KeyShot. Some decisions should be deferred — if you're not sure whether a detail will matter at final rendering scale, don't model it yet. The tool switch should follow the decision logic, not the other way around.

It also means paying attention to what each tool is hiding from you. Blender hides optical properties. KeyShot hides geometric complexity. Photoshop hides spatial relationships. Being in any one of them means not thinking about the things the others foreground. Deliberately stepping back from the tool to consider what it's making invisible is worth doing, especially at decision points.

The broader point — that the software you use shapes what you think, not just what you make — sounds obvious once you've experienced it but isn't beforehand. You can work through several projects in a given set of tools and experience the constraints without ever making them explicit. Making them explicit doesn't remove the constraints, but it means you can work around them on purpose rather than just accepting what the tool suggests.

The tools have gotten better since the work on this site was produced. The cognitive problem they pose hasn't. Any tool powerful enough to be useful is complex enough to have a model of the world, and that model shapes what you can see and what you can't. Understanding your tools means understanding their blind spots as much as their strengths.