Function-as-a-service (FaaS) technologies, including AWS Lambda, Azure Functions and IBM/Apache OpenWhisk, are experiencing mass adoption, even in private clouds, and it's easy to see why. The promise of serverless is simple: developers and IT teams can stop worrying about their infrastructure, system software and network configuration altogether. There's no need to load-balance, adjust resources for scale, or monitor network latency and CPU performance. Serverless computing can save you a lot of time, money and operational overhead, if you play your cards right.

Say goodbye to the idle instance

There's also less waste with serverless computing. You only pay for infrastructure in the moment that code gets executed (that is, each time a request is processed). It's the end of the server that just sits there. But with all these advantages, IT practitioners also face an avalanche of complexity and new challenges.

The fundamental challenge of serverless computing is easy to imagine: if something is ephemeral, how do you observe standard infrastructure metrics for health, uptime and availability? While serverless removes some of the heavy lifting associated with infrastructure management, it introduces a new set of issues that IT infrastructure teams will need to address:

Efficient code is now business-critical

Nothing ruins your potential serverless cost savings like spinning up an instance to rewrite code or fix errors. You'll want more visibility into error handling and resource usage to understand where your serverless costs can be streamlined.

Resource optimization is now the responsibility of the serverless customer

The great advantages of public cloud, like efficient use of resources and service delivery, are as fleeting as the instance itself. Even though the serverless instance may be available, overall responsibility for service availability shifts back to the IT operations team.
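To make the pay-per-execution model concrete, here is a minimal sketch of estimating a monthly FaaS bill from invocation volume, duration and memory. The rates below are illustrative placeholders, not any provider's actual pricing; the shape of the calculation (compute billed in GB-seconds plus a per-request charge) is the point.

```python
def estimate_cost(invocations, avg_duration_ms, memory_mb,
                  price_per_gb_second=0.0000167,      # placeholder rate, not real pricing
                  price_per_million_requests=0.20):   # placeholder rate, not real pricing
    """Rough monthly cost: compute charge (GB-seconds) plus request charge."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * price_per_gb_second
    request_cost = (invocations / 1_000_000) * price_per_million_requests
    return compute_cost + request_cost

# A function invoked 5M times a month at 120 ms and 512 MB:
print(round(estimate_cost(5_000_000, 120, 512), 2))
```

Note how duration and memory multiply directly into the bill: shaving milliseconds off a hot code path, or right-sizing memory, is a cost decision, not just an engineering one.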
You still need to invest in infrastructure monitoring

You still need visibility to ensure digital experiences and to prevent downtime and security breaches. Cloud providers offer some standard monitoring capabilities, but serverless computing makes end-to-end visibility more complex, and existing legacy (and many cloud-only) monitoring tools won't cut it. You want a modern approach that can handle on-prem and multi-cloud environments as well as microservice architectures. If there's a different instance every single second (or less), imagine the challenges of monitoring those instances for uptime, availability, performance or configuration. DevOps teams call this observability: tracking application performance (metrics, traces and logs) from moment to moment. Monitoring also needs to be latency-sensitive and account for cold starts, the lag of spinning up a new instance. Your modern infrastructure monitoring solution should be able to distinguish a start-up lag from an actual disruption.

Welcome to application hyper-awareness

IT operations teams will need to know more than ever about application usage and understand the specific limitations of their FaaS applications. You'll want to effectively track how the cloud provider executes and charges for your application code, and know what to fix when things go awry. All of these demands will require more modern solutions for serverless infrastructure management.

Automation is the rule, not the exception

You'll also need to build the right automation for effective monitoring of dynamic serverless workloads, and couple it with the insights of your experienced cloud engineers. There's too much risk in leaving serverless monitoring entirely to your cloud provider, but it's no longer something that can be done manually.

The opportunity for modernization: Serverless can't be Ops-less

All of these demands refocus the challenges facing modern IT operations teams.
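The cold-start distinction described earlier can be sketched in the function itself: module-level state survives across warm invocations of the same container (as in AWS Lambda's Python runtime), so a handler can flag whether an invocation was a cold start and emit that alongside its latency. The structured log line here is a hypothetical format for illustration, not a provider API; a monitoring pipeline consuming it could then avoid alerting on start-up lag.

```python
import json
import time

_cold_start = True  # module globals persist across warm invocations of one container

def handler(event, context):
    global _cold_start
    was_cold, _cold_start = _cold_start, False
    start = time.time()
    # ... business logic would run here ...
    duration_ms = round((time.time() - start) * 1000, 2)
    # Structured log line (hypothetical format) lets downstream monitoring
    # separate cold-start lag from a genuine performance disruption.
    print(json.dumps({"cold_start": was_cold, "duration_ms": duration_ms}))
    return {"cold_start": was_cold}
```

The first invocation in a fresh container reports `cold_start: true`; subsequent warm invocations report `false`, so latency spikes can be attributed correctly.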
Pivoting toward serverless requires new strategies to optimize infrastructure for serverless development. It also requires a new framework for incident response (when incidents are as ephemeral as instances) and for log management and analytics that can keep up with the speed of change. These new serverless demands are opportunities for ops teams to support efficient development while tracking and managing infrastructure that's invisible. And when resources need to be continuously tuned for efficient, cost-effective development, a flexible, agile digital operations framework is as important as ever.

The best way IT operations can prepare for this change? Develop the skills. Find ways to consolidate tools. Simplify. Adopt a cloud-centric, integrated approach to monitoring and management to move with high velocity at scale. Gartner research Vice President Andrew Lerner said it well: "While Serverless is hailed as the holy grail of 'NoOps', the reality is there is plenty of cloud centric operational know-how as well as security, monitoring, debugging skills that will be required to operate these in a production environment and/or at a scale."

Serverless is a tremendous leap forward for infrastructure management. But, like any leap forward, it requires a reassessment of how things are done. Infrastructure teams need a different type of governance and an extreme level of flexibility. They need to continue to embrace multi-cloud strategies while taking on new responsibilities for optimization, utilization and governance. IT Ops won't become obsolete. But the sooner it adopts a framework to support the evolution of serverless technology in the face of digital transformation, the sooner it can help the business realize even greater serverless value.