Server-Client #
Server-client is suitable for applications that involve distributed data and processing across multiple components over a network.
Topology #
Basic component diagram:

- Components:
  - Clients: initiate requests.
  - Servers: respond to requests and provide services/data.
- Connectors:
  - request/response (HTTP, REST, RPC) or message-based communication (queues/streams).
Server-client is widely adopted, from web browsers and HTTP servers to databases and email systems.
Traditionally, the server and client usually run on separate physical devices, which provides two benefits: (1) better efficiency because heavy computation can be offloaded to a more powerful (in terms of computing power or storage capacity) server machine; (2) separation of execution environments so that the client and server can use different software stacks to optimize for their own purposes.
Nowadays, as devices become more powerful, many servers and clients actually run on the same device (but still in separate execution environments, e.g., containers or virtual machines); this is especially common in complex applications with many components.
The communication between servers and clients is usually remote calls over the network (similar to the event-based style).
One server can serve multiple clients, allowing new clients to plug in dynamically. There can also be multiple servers providing the same service (i.e., mirror servers) that a client can choose from.
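The request/response interaction between a client and a server can be sketched with a toy TCP service in Python (the uppercase "service" and single-connection handling are illustrative simplifications; real servers accept many clients concurrently):

```python
import socket
import threading

def serve(host="127.0.0.1", port=0):
    """A toy server: responds to each request by uppercasing it."""
    srv = socket.create_server((host, port))
    port = srv.getsockname()[1]   # port 0 lets the OS pick a free port

    def handle():
        conn, _ = srv.accept()          # accept a single client for this sketch
        with conn:
            data = conn.recv(1024)      # receive the request
            conn.sendall(data.upper())  # respond: the "service" the server provides

    threading.Thread(target=handle, daemon=True).start()
    return port

def request(port, payload: bytes) -> bytes:
    """A toy client: initiates a request and waits for the response."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(payload)
        return c.recv(1024)

port = serve()
print(request(port, b"hello"))  # b'HELLO'
```

The client only knows the server's address and the request/response contract; swapping the transport for HTTP or RPC keeps the same topology.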

Pros and cons #
- Pros
- Clear distribution of responsibilities: clients vs servers.
- Heterogeneity: clients and servers can be built on different stacks.
- Evolvability: servers can often be upgraded or replicated without changing clients if contracts are stable, and vice versa.
- Cons
- Reliability/Efficiency: both now depend on network connectivity.
- Security: sending sensitive data over the network can be a security risk.
- Scalability: service discovery and governance become harder as the number of servers grows.
Variants #
There are two noteworthy variants of server-client: microservices and serverless; they emerged as practical solutions to address the limitations of the plain server-client style in complex applications. Both of them add constraints (e.g., on the topology and the lifecycle of the servers), which help simplify and automate the maintenance of large numbers of servers.
Cloud computing providers (e.g., Google Cloud, AWS, Azure) offer great infrastructure support for writing microservices and serverless applications. This includes automated deployment and scaling of servers, load balancing, monitoring and logging, etc.
Microservices #

The components in the microservices architectural style are called services (or microservices), which act as servers and clients at the same time. A service should provide a single functionality (sometimes just one step in a use case), and can call other services. External users do not interact with the services directly; instead, they interact only with an API gateway, which proxies requests to the appropriate services.
Typical infrastructure support for microservices includes:
- An orchestrator like Kubernetes to automate the deployment, scaling, and management of the services. Each service is executed in its own container and can thus be implemented with its own software stack.
- A service mesh like Istio to ensure reliable and secure communication between the services. It also handles things like load balancing, circuit breaking, and provides observability on network traffic between the services.
- Application-level concerns like authentication and rate limiting are handled by the API gateway, so that individual services do not each need to implement these.
Serverless #

The serverless architectural style goes one step further than microservices, constraining the components to be short-lived, stateless functions. Each function is provisioned on demand and destroyed after execution. The platform (not the developer) manages the function's lifecycle and decides when to scale up the containers and physical machines to handle the load. Serverless is often referred to as FaaS (Function as a Service).
One motivation for serverless is to let developers focus on the actual code rather than the infrastructure. Thus serverless typically comes with rich infrastructure support, in addition to everything in microservices:
- Frameworks for writing functions in various programming languages, e.g., Google Cloud Functions, AWS Lambda, Azure Functions.
- Since each function is stateless, a centralized database or object storage is used when persistent state or data storage is needed; this is usually provided by the platform so that all functions can easily use it.
The convenience of not worrying about the infrastructure comes at a cost in the function's capabilities, though. Each function is stateless and usually has a hard limit on its execution time (e.g., 15 minutes) and memory usage. A cold start can happen when a function has not been used for a while (and thus no warm container is available), which can impact latency. Serverless is a good fit when the load is bursty (occurring at random intervals) and latency is not a big concern.
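The statelessness constraint can be illustrated with an AWS-Lambda-style handler (the `event` shape is an assumption, and the in-memory dict is only a stand-in for external storage; a real function would call out to S3 or a database, e.g., via boto3):

```python
# Stand-in for EXTERNAL storage (object store, database, ...).
# A real function must externalize state like this, because its
# container may be destroyed between invocations.
OBJECT_STORE = {}

def handler(event, context=None):
    """Lambda-style entry point: stateless, short-lived, triggered per event."""
    key = event["key"]
    result = event["data"].upper()   # the actual work, kept short
    OBJECT_STORE[key] = result       # persist the result externally
    return {"status": "ok", "key": key}

# Each invocation is independent; the platform may run them on different
# containers, scaled up with the load and down to zero when idle.
print(handler({"key": "a.txt", "data": "hi"}))  # {'status': 'ok', 'key': 'a.txt'}
```

Because the function keeps no state of its own, the platform is free to run any invocation on any container, which is exactly what makes automatic scaling possible.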
Comparisons: Microservices vs Serverless #
| | Microservices | Serverless |
|---|---|---|
| Runtime | Runs 24/7 | Runs when triggered |
| Hosting | In house or on cloud | Tied to cloud provider |
| Functionality | Complex functionalities possible | Short-running simple operations |
| Cost | Expensive upfront | Reduced cost (pay-per-use) |
| Platforms | Google Cloud, AWS, Azure | Cloud Run, AWS Lambda, Azure Functions |
Real-world examples #
PostgreSQL (classic server-client database) #
PostgreSQL is a straightforward server-client system: the server provides a query interface, and many heterogeneous clients (apps, tools, services) connect concurrently. In terms of topology, the database server is the server component, client libraries/tools are clients, and the connector is the database protocol carrying SQL queries and results. Real deployments add connection pooling, replication, and access control to meet performance and reliability requirements.
Further reading: PostgreSQL documentation
nginx (server-client at Internet scale) #
nginx is a high-performance HTTP server and reverse proxy that sits between clients and upstream servers. It illustrates a common server-client adaptation: adding an intermediary server component that terminates client connections and forwards requests to upstream servers. The connectors are still HTTP (and related protocols), but the topology now includes reverse proxying, load balancing, and caching.
Further reading: The Architecture of Open Source Applications: nginx, nginx documentation
Online Boutique (microservices) #
Online Boutique is a cloud-native microservices demo by Google: an e-commerce store decomposed into 11 independently deployable services (Go, C#, Node.js, Python, Java) communicating over gRPC. Each microservice is a server component, the API gateway and service-to-service gRPC calls are key connectors, and Kubernetes handles orchestration. It illustrates polyglot development, database-per-service (Redis for carts, a JSON file for the product catalog), and observability via OpenTelemetry.
Further reading: GitHub repository, Google Cloud Architecture Center: e-commerce microservices
Bento (serverless video transcoding) #
Bento is an open-source serverless video transcoding pipeline deployed on AWS Lambda. A video is split into hundreds of small segments, and each segment is transcoded by a separate Lambda function in parallel, then merged back. This turns a multi-hour single-machine job into minutes. The components are stateless Lambda functions (triggered by S3 upload events), and all state is externalized to S3 (video files) and DynamoDB (job metadata) – a textbook application of the serverless variant’s constraints. The architecture directly exploits FaaS auto-scaling: up to 1,000 concurrent Lambda containers spin up within seconds and scale back to zero when done.
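Bento's fan-out/fan-in pattern can be sketched with a thread pool standing in for concurrent Lambda invocations (the byte-level "segments" and the uppercase "transcode" step are simplified placeholders for video segments and real transcoding):

```python
from concurrent.futures import ThreadPoolExecutor

def split(video: bytes, n: int):
    """Split the input into roughly n equal segments."""
    size = max(1, len(video) // n)
    return [video[i:i + size] for i in range(0, len(video), size)]

def transcode(segment: bytes) -> bytes:
    """Placeholder for one Lambda invocation transcoding one segment."""
    return segment.upper()

def pipeline(video: bytes, workers: int = 8) -> bytes:
    segments = split(video, workers)
    # Fan out: one stateless worker per segment, all in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(transcode, segments))
    # Fan in: merge the transcoded segments back in order.
    return b"".join(results)

print(pipeline(b"abcdefgh", workers=4))  # b'ABCDEFGH'
```

Since each segment is processed independently, the wall-clock time approaches the cost of a single segment plus the merge, which is why hundreds of concurrent functions can turn hours into minutes.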
Further reading: Bento case study, GitHub repository





