Serverless Functions: When Writing Just Code Is Enough

Understanding the Most Extreme (and Powerful) Form of Serverless Computing


In a previous post, we explored serverless as a concept—not as a specific service, but as an idea:
the gradual abstraction away from servers so teams can focus on what they are building rather than where or how it runs.

In this post, we’ll zoom in on one specific — and increasingly popular — point on that spectrum:
serverless functions.

Serverless functions represent the most extreme version of serverless computing. They are powerful, elegant, and deceptively simple. But they are also easy to misuse if you don’t understand what problem they are actually solving.


From Applications to Functions

Traditionally, applications were built as long-running services.

Whether hosted in your own data center or on cloud virtual machines, the model was the same:

  • Provision a server
  • Install an operating system
  • Install a runtime (Java, Node.js, Python, Go, etc.)
  • Deploy the application
  • Keep it running, waiting for requests

Even after cloud providers removed the need to buy physical hardware, teams were still responsible for:

  • Capacity planning
  • Scaling decisions
  • Patching
  • Uptime

Serverless functions fundamentally change this model.


What Is a Serverless Function?

A serverless function is a small, self-contained piece of code that:

  • Performs a single task
  • Runs only when invoked
  • Stops executing once the task is complete

You do not manage:

  • Servers
  • Operating systems
  • Runtimes
  • Scaling rules
  • Patches or upgrades

You provide the code. The cloud provider handles everything else.

Common examples include:

  • AWS Lambda
  • Google Cloud Functions
  • Azure Functions

Despite the name, servers still exist.
You just never see them, manage them, or think about them.
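To make "you provide the code" concrete, here is a minimal sketch in the shape of an AWS Lambda Python handler. The platform invokes `handler(event, context)` on each request; the `name` field in the event is a hypothetical input for illustration.

```python
import json

# The platform calls handler(event, context) on every invocation.
# There is no server to start, no port to listen on, no process to keep alive.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, an invocation can be simulated by calling the function directly.
result = handler({"name": "serverless"}, None)
```

Everything outside this function — provisioning, routing, scaling, patching — is the provider's problem.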


A Simpler Mental Model

Instead of deploying “an application,” you deploy behaviors.

For example:

  • One function returns a list of users
  • Another returns products
  • A third fetches transactions for a specific user

Each function:

  • Has a single responsibility
  • Can be invoked independently
  • Often has its own URL or event trigger

A request comes in, the function runs, and a response comes back.
Nothing stays running when it’s not needed.
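The "behaviors, not applications" model above can be sketched as three independent handlers, each with a single responsibility. The in-memory data and the `pathParameters` event shape are hypothetical stand-ins for a real datastore and trigger.

```python
import json

# Hypothetical in-memory data standing in for a real datastore.
USERS = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Linus"}]
PRODUCTS = [{"id": 10, "name": "Keyboard"}]
TRANSACTIONS = {1: [{"id": 100, "amount": 42.0}], 2: []}

# Each function does one thing and can be deployed, invoked,
# and scaled independently of the others.
def list_users(event, context):
    return {"statusCode": 200, "body": json.dumps(USERS)}

def list_products(event, context):
    return {"statusCode": 200, "body": json.dumps(PRODUCTS)}

def get_transactions(event, context):
    user_id = int(event["pathParameters"]["userId"])
    return {"statusCode": 200, "body": json.dumps(TRANSACTIONS.get(user_id, []))}
```

In practice each handler would sit behind its own URL or event trigger; none of them share a running process.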


A Useful Analogy: Serverless Functions as a Parking Lot

To make this more concrete, consider a parking lot outside a shopping mall.

Cars arrive, park, leave, and free up spots.
No spot is permanently assigned to any customer. Spots exist only to be used when needed.

Serverless functions work the same way:

  • Compute is allocated when a request arrives
  • The function executes
  • Resources are released immediately afterward

Nothing is kept running “just in case.”
This makes the system extremely efficient, especially for unpredictable traffic.


Where Cold Starts Come From

Now imagine the parking lot late at night.

The lot is empty. Gates are closed. Lights are off.

When the first car arrives in the morning, a few things must happen:

  • Gates open
  • Lights turn on
  • Systems wake up

That first car experiences a brief delay.

This is what a cold start is.

If a serverless function hasn’t run recently:

  • There is no active execution environment
  • One must be created before the code runs

Subsequent requests — like cars arriving after the lot is open — are usually fast.

Cold starts are not a flaw. They are the cost of efficiency.
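The cold-start behavior described above can be illustrated in code. In most function runtimes, module-level code runs once when an execution environment is created (the cold start), and warm invocations reuse that environment. This is a simplified sketch; the invocation counter is a stand-in for real per-environment state such as a database connection pool.

```python
import time

# Module-level work runs once per execution environment — i.e. during
# the cold start. Warm invocations reuse whatever was built here.
EXPENSIVE_CONFIG = {"loaded_at": time.monotonic()}  # stand-in for loading config, opening connections, etc.

INVOCATIONS = 0

def handler(event, context):
    global INVOCATIONS
    INVOCATIONS += 1
    # Only the first invocation in this environment paid the setup cost.
    return {"cold_start": INVOCATIONS == 1, "config": EXPENSIVE_CONFIG}

first = handler({}, None)   # simulates the "first car in the morning"
second = handler({}, None)  # simulates a warm request
```

This is also why a common mitigation is to move expensive setup to module scope: the cost is paid once per environment rather than on every request.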


The Real Value Proposition

The primary benefit of serverless functions is the elimination of operational work.

Teams no longer manage:

  • Idle servers
  • Scaling logic
  • Instance failures
  • Patch cycles

Instead, effort shifts almost entirely to:

  • Business logic
  • Inputs and outputs
  • Error handling in code

For many teams, this dramatically reduces cost and complexity.


The Trade-Offs (The Asterisk)

Serverless functions are not a universal solution.

Key limitations include:

  • Cold starts, which can affect latency-sensitive systems
  • Execution limits on time, memory, and CPU
  • Distributed complexity as the number of functions grows

What feels simple at the function level can become complex at the system level if not designed carefully.


Serverless Functions on the Abstraction Spectrum

It’s important to remember:

Serverless functions are not “better” than servers.
They are more abstract.

They sit at the far end of a spectrum:

  • Self-managed servers
  • Managed servers
  • Managed runtimes
  • Serverless functions

Each step removes responsibility — and control.

Strong engineering teams choose the abstraction level that fits the problem, not the trend.


When Serverless Functions Work Best

Serverless functions shine when:

  • Workloads are event-driven
  • Execution time is short and predictable
  • Traffic is spiky or unpredictable
  • Teams want minimal operational overhead
  • Speed of iteration matters

They are often a great starting point — and sometimes the right long-term choice.


Closing Thoughts

Serverless functions represent the logical extreme of the serverless philosophy:
write only what matters and let everything else disappear.

Used thoughtfully, they can simplify systems and teams dramatically.
Used blindly, they can introduce new kinds of complexity.

In future posts, we’ll explore:

  • When not to use serverless functions
  • How they compare to container-based approaches
  • How teams evolve between different abstraction levels over time

What Do You Think?

Have you used serverless functions in production?

  • Did they simplify your system?
  • Did cold starts matter?
  • Where did the abstraction help — or hurt?

I’d love to hear your perspective.
The most valuable insights come from real-world experience.
