Why helpdesk fundamentals are not enough in an industry where the entire technology stack can shift twice in a decade.
In the previous instalments of this series, we have discussed efficiency during lean periods, understanding colleague workflows, security posture, operational resilience, AI guardrails, and the balance between innovation and stability. These topics apply broadly across industries. But there is one sector where every single one of those themes converges with an intensity that few other environments can match.
Video game development.
This is an industry that builds some of the most technically demanding products in the world, products that ship to millions of consumers simultaneously, yet often treats IT as an afterthought. The expectation in many studios is that IT exists to set up machines, reset passwords, and keep the Wi-Fi running. That expectation is not only outdated. It is actively harmful to production.
Not Just Another Tech Company
From the outside, video game studios look like any other technology company. Developers write code. Artists use workstations. Designers collaborate in shared tools. There are servers, networks, and cloud subscriptions.
The resemblance is surface-level.
Underneath, a game studio operates more like a film production crossed with a software engineering firm, running on timelines dictated by hardware manufacturers, platform holders, and a consumer market that has no patience for technical excuses. The production pipeline in a game studio is not a simple sequence of inputs and outputs. It is a dense, interconnected web of proprietary tools, middleware, engine builds, asset management systems, version control at massive scale, render farms, build distribution platforms, QA infrastructure, and live service backends. All of it moving in parallel. All of it interdependent.
An IT team that does not understand this pipeline is not supporting the studio. It is merely occupying space within it.
The Production Pipeline Is the Product
In most industries, IT supports the business process. In game development, IT is embedded within the product pipeline itself. Consider the chain of dependencies in a single day of production at a mid-to-large studio:
- Artists check in high-resolution assets through version control systems like Perforce, often pushing hundreds of gigabytes per day across distributed teams.
- Those assets are ingested by the game engine's build pipeline, which compiles, cooks, and packages them for target platforms.
- Build servers run continuous integration, producing testable builds for QA, design, and leadership review.
- QA teams deploy those builds to dev kits, test hardware, and cloud-streaming environments.
- Multiplayer engineers rely on backend services, databases, and matchmaking infrastructure that must mirror production environments.
- Live operations teams monitor telemetry, player data, and service health in real time once the game ships.
If any single link in this chain breaks, the downstream effect is not a minor inconvenience. It is a production stoppage. A failed build server can halt an entire studio's daily progress. A misconfigured Perforce proxy can turn a ten-second file sync into a twenty-minute ordeal, multiplied across hundreds of users. A network bottleneck during asset ingestion can delay milestone submissions by days.
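The interdependence described above can be made concrete. A minimal sketch, using illustrative stage names rather than any real studio's pipeline, showing how a single failed link blocks everything downstream:

```python
# Sketch: the production chain as a dependency graph. Stage names and
# dependencies are illustrative, not any real studio's pipeline.
PIPELINE = {
    "asset_checkin":  [],
    "engine_cook":    ["asset_checkin"],
    "ci_build":       ["engine_cook"],
    "qa_deploy":      ["ci_build"],
    "backend_mirror": ["ci_build"],
    "live_telemetry": ["qa_deploy", "backend_mirror"],
}

def blocked_by(failed_stage: str, pipeline: dict) -> set:
    """Return every stage that cannot proceed once `failed_stage` is down."""
    blocked = {failed_stage}
    changed = True
    while changed:
        changed = False
        for stage, deps in pipeline.items():
            if stage not in blocked and any(d in blocked for d in deps):
                blocked.add(stage)
                changed = True
    blocked.discard(failed_stage)
    return blocked

# A failed CI build stalls everything downstream of it:
print(sorted(blocked_by("ci_build", PIPELINE)))
# → ['backend_mirror', 'live_telemetry', 'qa_deploy']
```

The same traversal shows why a failure at the top of the chain, asset check-in, halts the entire studio rather than a single team.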
IT teams that view their role as separate from this pipeline will consistently be blindsided by the urgency and complexity of the problems they are asked to solve.
The Tectonic Shifts: How the Ground Moves Twice a Decade
Most industries experience technological change gradually. New software versions roll out. Cloud migrations happen over quarters or years. Hardware refreshes follow predictable depreciation cycles.
Video games do not operate on that cadence.
The games industry is tethered to hardware generations and platform evolution in a way that few other sectors are. When a new console generation launches, when a new graphics API becomes standard, when a major engine overhauls its rendering pipeline, the ripple effects are seismic. These shifts tend to arrive roughly every five to seven years, meaning that within a single decade, the foundational technology that a studio's entire workflow depends upon can change fundamentally. Twice.
This applies to mobile studios just as acutely, though the shifts take a different shape. Mobile game development is governed by the release cycles of Apple and Google. A single iOS update can deprecate rendering frameworks, change how push notifications behave, or alter memory management rules overnight. Android fragmentation introduces its own layer of complexity, where a game must perform acceptably across thousands of device configurations with wildly different chipsets, screen resolutions, and OS versions. When Apple transitioned from OpenGL ES to Metal, or when Google began enforcing 64-bit requirements and target API level mandates, studios that were not prepared lost weeks of production time scrambling to comply.
Consider what has shifted in the last decade alone:
- Console generations transitioned from the PS4/Xbox One era to the PS5/Xbox Series generation, requiring entirely new dev kit infrastructure, updated SDKs, and new build configurations.
- Mobile platforms moved through multiple seismic shifts: the deprecation of OpenGL ES in favour of Metal and Vulkan, mandatory 64-bit support, App Tracking Transparency upending analytics and monetisation pipelines, and increasingly aggressive background process restrictions that changed how live games maintain persistent connections.
- Game engines have moved from largely offline, packaged-build models to live-service, always-connected architectures requiring persistent backend infrastructure. For mobile studios, this shift was not optional. The free-to-play model that dominates mobile demands live operations from day one.
- Asset fidelity has increased exponentially, with photogrammetry, volumetric capture, and procedural generation placing massive new demands on storage, networking, and compute. Even mobile titles now ship with gigabytes of downloadable assets and require robust CDN strategies for over-the-air content delivery.
- Remote and distributed development, accelerated by the pandemic, has become a permanent fixture, requiring studios to rethink VPN architecture, remote workstation access, and globally distributed build systems.
- AI-assisted workflows for content generation, testing, and localisation have begun entering production pipelines, and studios are still determining what infrastructure, governance, and access controls these tools require.
Each of these shifts does not merely add to the existing workload. It restructures it. The IT team that was expertly managing on-premise Perforce servers in 2018 may now need to architect hybrid cloud-edge solutions for globally distributed teams. The mobile studio IT team that once maintained a handful of Mac Minis for iOS builds may now be managing a fleet of Apple Silicon build agents, Android signing infrastructure across multiple keystores, and automated submission pipelines to both app stores simultaneously.
This is not incremental change. It is periodic reinvention.
The Knowledge Gap: Helpdesk Fundamentals Are Not Enough
There is nothing wrong with strong helpdesk skills. Provisioning accounts, imaging machines, managing device inventories, and handling break-fix tickets are all necessary functions. They are the foundation. But in a game studio, they are only the foundation.
The challenge is that many studios, particularly smaller or mid-sized ones, hire IT staff with generalist backgrounds and expect them to operate in an environment that demands specialist knowledge. This is especially common in mobile studios, where the early-stage team is small enough that IT responsibilities are shared informally or handled by a single person wearing multiple hats. The result is a persistent knowledge gap that only becomes visible when it is already causing damage.
An IT administrator in a game studio needs to understand, at minimum:
- Version control at scale. Perforce is the industry standard for large binary assets in console and PC development. Mobile studios often start with Git or Git LFS, which works adequately for a single small project but begins to strain under the weight of multiple concurrent titles with large asset repositories. Understanding when and how to migrate, or how to manage branching strategies across several live projects sharing common frameworks, is critical knowledge that a generalist background does not provide.
- Build infrastructure. Whether it is Jenkins, TeamCity, Unreal's BuildGraph, Fastlane for mobile, or a custom system, IT must understand how builds are compiled, distributed, and validated. In mobile studios, build infrastructure carries additional complexity: iOS builds require macOS hardware, Android builds require managing SDK versions and NDK configurations, and both platforms demand code signing workflows that are fragile and poorly documented. A build engineer and an IT administrator in this industry share a significant overlap in responsibilities.
- Workstation specifications and GPU workflows. Artists, programmers, and technical artists have workstation requirements that are fundamentally different from a standard corporate environment. Mobile studios may underestimate this, assuming that because the target device is a phone, the development hardware can be modest. This is a misconception. Authoring content for mobile still demands capable workstations, and the testing matrix of physical devices that IT must procure, manage, charge, update, and distribute across QA teams is a logistical challenge unto itself.
- Network architecture for high-throughput environments. The volume of data moving through a studio's network, including asset syncs, build distribution, render output, and telemetry streams, dwarfs typical enterprise traffic. Network design must account for this or production suffers.
- Platform-specific compliance and security. Console development requires adherence to strict NDAs and security requirements from platform holders like Sony, Microsoft, and Nintendo. Mobile development carries its own compliance burden: App Store review guidelines that change without warning, Google Play policy updates that can pull a live game from the store, privacy regulations that affect SDK integration, and the constant management of provisioning profiles, certificates, and entitlements that silently expire and break builds at the worst possible moment. IT must understand these requirements at a level that goes well beyond standard corporate policy.
- Broader confidentiality obligations. Beyond platform holders, studios also manage NDA and confidentiality obligations with middleware providers, outsourcing partners, and service vendors. An IT team must understand which tools and environments are subject to these agreements, and ensure that access provisioning, data handling, and network segmentation reflect those contractual boundaries. A vendor NDA breach caused by misconfigured access is not a hypothetical. It is a career-ending event for the people responsible.
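Some of these obligations can be partially automated. A minimal sketch of the kind of credential-expiry check that prevents the "silently expiring certificate" failure mode, with a hypothetical inventory; in practice the dates would be read from provisioning-profile plists, keystore metadata, or an MDM export:

```python
from datetime import date, timedelta

# Sketch: flag signing credentials that will expire soon. The inventory
# below is hypothetical; real data would be parsed from .mobileprovision
# plists, keystore metadata, or an MDM export.
CREDENTIALS = {
    "ios_distribution_cert":   date(2025, 3, 1),
    "ios_push_cert":           date(2024, 11, 20),
    "android_upload_keystore": date(2030, 6, 15),
}

def expiring_soon(inventory, today, window_days=30):
    """Return credential names expiring within `window_days` of `today`."""
    cutoff = today + timedelta(days=window_days)
    return sorted(name for name, expiry in inventory.items() if expiry <= cutoff)

print(expiring_soon(CREDENTIALS, today=date(2024, 11, 1)))
# → ['ios_push_cert']
```

Run on a schedule and wired into an alerting channel, a check like this turns a build-breaking surprise into a routine renewal task.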
None of this is exotic knowledge. But it is specialised, and it is rarely part of a traditional IT training path. The expectation that a generalist helpdesk background prepares someone for this environment is one of the most common and most costly misconceptions in the industry.
The Scaling Problem: From One Project to Many
Perhaps nowhere is the gap between generalist IT and production-aware IT more painfully exposed than in mobile studios that experience rapid growth.
The pattern is familiar. A studio launches with a single game. The team is small. Infrastructure is lean, often held together with a combination of cloud services, manual processes, and institutional knowledge stored in a few people's heads. IT, if it exists as a distinct function at all, is reactive and informal. Tickets are Slack messages. Documentation is sparse. It works because the scale is manageable.
Then the game succeeds.
Revenue comes in. The studio greenlights a second project. Then a third. Hiring accelerates. Suddenly there are multiple teams, each with different engine versions, different backend stacks, different build requirements, and different release cadences. The infrastructure that comfortably supported thirty people working on one game cannot support a hundred and fifty people working on four.
This is where the cracks appear.
- Identity and access management becomes tangled. What started as a flat permission structure with everyone having access to everything must now be segmented by project, by discipline, by seniority. Platform holder NDAs may require that only specific employees can access certain repositories or dev kits. Onboarding a new hire used to take an afternoon. Now it takes days because nobody has documented which groups, tools, licences, and environments each role requires.
- Access monitoring becomes a continuous obligation. Equally important is what happens after access is granted. Without continuous access monitoring, permissions accumulate and drift. An artist who moved from Project A to Project B six months ago may still have write access to both repositories. A contractor whose engagement ended may still have active credentials. Access reviews in a fast-moving studio feel like overhead until the audit, or the breach, arrives. Automated access monitoring and periodic entitlement reviews are not bureaucratic exercises. They are the minimum standard for a studio handling multiple projects under separate NDAs and compliance requirements.
- Build infrastructure does not scale linearly. A single build pipeline for one project is straightforward. Four concurrent pipelines, each with their own platform targets, signing configurations, and release branches, competing for the same build agents and artefact storage, is an entirely different problem. Build queues back up. Developers wait. Production slows.
- Tooling sprawl accelerates. Each new project team brings preferences. One team uses Jira, another prefers Linear. One team deploys backends on AWS, another inherited a GCP setup. Without intentional governance, the tool landscape fragments, and IT is left supporting an ever-expanding matrix of platforms with no standardisation and no leverage.
- Live operations multiply the surface area. A single live game requires monitoring, incident response, content deployment, and player-facing service management. Multiple live games multiply all of this. Each game has its own release calendar, its own event schedule, its own critical revenue periods. An outage during a limited-time event in one game is a revenue loss measured in real currency. IT must ensure that the infrastructure supporting these services is resilient, observable, and independently manageable.
- Technical debt compounds invisibly. The shortcuts that were acceptable at a smaller scale, such as hardcoded configurations, manual deployment steps, and undocumented server setups, become liabilities. But there is rarely a mandate to address them because leadership is focused on shipping the next game. IT inherits this debt whether or not it was involved in creating it.
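The access-drift problem is well suited to a simple automated report. A minimal sketch, with hypothetical role and grant data; real inputs would come from the identity provider and the Perforce or cloud permission systems themselves:

```python
# Sketch: a minimal access-drift report. Role definitions and current
# grants are hypothetical; real data would be exported from the identity
# provider and the permission systems being audited.
ROLE_ACCESS = {
    "artist_project_b": {"p4_project_b", "jira", "render_farm"},
}
CURRENT_GRANTS = {
    # Alex moved from Project A to Project B but kept the old repo access.
    "alex": ("artist_project_b",
             {"p4_project_a", "p4_project_b", "jira", "render_farm"}),
}

def drift_report(grants, roles):
    """Flag permissions each user holds beyond what their role defines."""
    report = {}
    for user, (role, held) in grants.items():
        excess = held - roles.get(role, set())
        if excess:
            report[user] = sorted(excess)
    return report

print(drift_report(CURRENT_GRANTS, ROLE_ACCESS))
# → {'alex': ['p4_project_a']}
```

Even a report this crude, run periodically, surfaces exactly the stale grants that manual reviews miss.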
The studios that navigate this transition successfully are the ones where IT is involved early in the scaling conversation. Not after the third project has been greenlit and the infrastructure is already straining, but at the point where growth is being planned. IT needs a seat in that room, not to slow things down, but to ensure that the foundation can support what is being built on top of it.
The Cultural Disconnect
There is often a cultural gap between IT departments and production teams in game studios. Developers, artists, and designers are accustomed to working with cutting-edge technology. They push hardware to its limits. They customise their tools extensively. They expect rapid iteration and minimal friction.
In mobile studios, this culture runs particularly hot. The pace of live operations means that production teams are accustomed to shipping updates weekly, sometimes more frequently. They expect environments to be available, builds to be green, and deployments to be seamless. When IT introduces process — change windows, approval gates, access reviews — it can feel like friction being imposed by people who do not understand the urgency.
IT teams that approach this environment with a rigid, policy-first mindset will encounter resistance. Not because production teams are undisciplined, but because the nature of creative production demands flexibility that traditional IT governance models do not always accommodate.
This does not mean security and process should be abandoned. Far from it. As we discussed in Part 3 and Part 5 of this series, security posture and AI governance are non-negotiable. But the approach must be adapted to the context. Lockdown policies that work in a financial services firm will strangle a game studio. Approval workflows designed for quarterly software deployments will be incompatible with a production environment that deploys internal builds multiple times per day and pushes live content updates to millions of players on a weekly cadence.
The most effective IT teams in game studios are those that earn their seat at the production table. They attend sprint reviews. They understand milestone deliverables. They know what "alpha", "beta", and "gold master" mean in console development and what "soft launch", "global launch", and "LiveOps calendar" mean in mobile. They understand that a store submission deadline is not a suggestion. They are not waiting for tickets to arrive. They are anticipating the needs before they become blockers.
Building for the Next Shift
Given that the technology landscape in games will continue to shift, the question is not whether the next disruption is coming. It is whether IT is prepared to absorb it without falling behind.
For mobile studios, the next shifts are already visible on the horizon. Platform holders are tightening privacy controls further. Cross-play and cross-progression between mobile and other platforms are becoming player expectations. Cloud gaming is blurring the line between mobile and console entirely. AI-driven content pipelines are promising to accelerate production but introducing new infrastructure requirements and governance questions that most studios have not yet answered.
As AI tools enter the production pipeline, studios need clear policy frameworks governing their use: what data can be fed into third-party models, how generated assets are reviewed for IP compliance, and who approves the integration of new AI services into production workflows. IT is uniquely positioned to enforce these frameworks at the infrastructure level, controlling which services are accessible, how data flows between internal systems and external APIs, and ensuring that usage is logged and auditable. Without this, AI adoption becomes another vector for shadow IT, as discussed in Part 5 of this series.
Preparation means investing in several areas:
- Modular infrastructure. Design systems that can be reconfigured without being rebuilt from scratch. Containerised build environments, infrastructure-as-code, and abstracted storage layers all contribute to adaptability. For studios running multiple live games, modular infrastructure also means shared services, such as centralised authentication, common monitoring stacks, and unified artefact repositories, that reduce duplication without creating dangerous single points of failure.
- Continuous learning. IT staff in game studios must be given time and resources to stay current with engine updates, platform SDK changes, and emerging tools. In mobile, this includes staying ahead of Apple's WWDC announcements, Google Play policy updates, and the evolving landscape of ad mediation, analytics, and attribution SDKs that live games depend upon. This is not a luxury. It is an operational necessity.
- Cross-functional relationships. IT should have direct lines of communication with technical directors, pipeline engineers, and production managers. When IT understands what production is building toward, it can provision proactively rather than reactively. In a multi-project studio, this means IT should have visibility into each project's roadmap, not just its current ticket queue.
- Documentation and knowledge transfer. The institutional knowledge of how a studio's pipeline works is often held by a handful of senior engineers. IT should actively participate in documenting these systems so that support continuity does not depend on individual availability. This is doubly important in fast-growing studios where the people who built the original infrastructure are increasingly consumed by the demands of the newest project and unavailable to support the systems they created.
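Declaring infrastructure as data is one way to make duplication across projects visible before it calcifies. A minimal sketch, with illustrative project and service names, flagging per-project services that shadow a shared one:

```python
# Sketch: declare studio infrastructure as data so duplication is visible.
# Project and service names are illustrative.
SHARED_SERVICES = {"auth", "monitoring", "artifact_store"}
PROJECTS = {
    "project_a": {"auth", "monitoring", "artifact_store", "game_backend_a"},
    "project_b": {"auth", "monitoring", "custom_artifact_store", "game_backend_b"},
}

def duplicated_services(projects, shared):
    """Flag per-project services whose names shadow a shared service."""
    flags = {}
    for name, services in projects.items():
        suspects = sorted(s for s in services
                          if s not in shared and any(sh in s for sh in shared))
        if suspects:
            flags[name] = suspects
    return flags

print(duplicated_services(PROJECTS, SHARED_SERVICES))
# → {'project_b': ['custom_artifact_store']}
```

A name-matching heuristic like this is crude, but the underlying discipline, keeping an auditable inventory of what each project runs, is what lets IT consolidate instead of accumulating parallel stacks.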
A Note on Recognition
There is an uncomfortable truth worth stating plainly. IT in video game studios is frequently under-resourced, under-recognised, and under-represented in production decisions. Studios will spend millions on user acquisition campaigns and proprietary engine features while running their IT operations on minimal staff and constrained budgets. Mobile studios are especially prone to this because the perceived simplicity of the platform, captured in phrases such as "it is just a mobile phone game" or "these are just casual games", masks the genuine complexity of the infrastructure required to develop, deploy, and operate live games at scale.
This is a structural problem, not an individual one. And it will not change until IT teams demonstrate, consistently, that they understand the production pipeline deeply enough to be considered part of it. This is not about seeking validation. It is about earning the influence needed to make infrastructure decisions that serve the studio's long-term health rather than merely reacting to its short-term emergencies.
Understanding the pipeline is not a bonus qualification. It is the baseline.
The State of IT in Games
The video game industry is similar to other technology sectors in its reliance on infrastructure, security, and operational discipline. It is fundamentally different in its pace of change, the density of its production pipelines, and the degree to which its supporting technology can be reshaped by external forces outside the studio's control.
Mobile game development amplifies these characteristics. The release cycles are faster. The platform shifts are more frequent and less predictable. The scaling challenges are more abrupt. And the expectation that IT can simply "keep things running" without deeply understanding what "things" are and how they connect is more dangerous.
IT teams in this space cannot afford to be generalists who happen to work in games. They need to be technologists who understand game production. The distinction matters because when the next platform shift arrives, when the next engine overhaul lands, when the next wave of tooling transforms how content is created, when the studio's third or fourth live game goes into production and the infrastructure must absorb it without collapsing, it will be the IT teams that understood the pipeline who adapt. Everyone else will be scrambling.
That is the reality of IT in video game studios. The ground moves. The question is whether you are building on bedrock or sand.