System Design & Architecture
Part 0 of 9
Ambassador Pattern: Offloading Connectivity Concerns to Helper Services
The Ambassador pattern places a helper service alongside an application to send network requests on its behalf. Unlike the sidecar pattern, which intercepts both incoming and outgoing traffic, an ambassador focuses specifically on outbound connectivity: managing connections to remote services with retry logic, circuit breaking, monitoring, and protocol translation.
The Problem Ambassador Solves
Legacy Application Connecting to Modern Cloud Services:
┌─────────────────────────────────────────────────────────────┐
│ Legacy Application │
│ (COBOL, Java 6, etc.) │
│ │
│ Problems when connecting to modern services: │
│ │
│ 1. Protocol Incompatibility │
│ └─ Application speaks HTTP/1.0, service requires HTTP/2│
│ └─ No gRPC support │
│ └─ No WebSocket capability │
│ │
│ 2. Authentication Complexity │
│ └─ OAuth2/OIDC token management │
│ └─ Certificate rotation │
│ └─ API key refresh │
│ │
│ 3. Resilience Patterns Missing │
│ └─ No built-in retry with backoff │
│ └─ No circuit breaker │
│ └─ No request hedging │
│ │
│ 4. Observability Gaps │
│ └─ No distributed tracing │
│ └─ Limited metrics export │
│ └─ Inconsistent logging format │
│ │
└─────────────────────────────────────────────────────────────┘
Ambassador Architecture
Ambassador Pattern Structure:
┌─────────────────────────────────────────────────────────────┐
│ Pod │
│ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Application Container │ │
│ │ │ │
│ │ // Application code - simple HTTP call │ │
│ │ response = http.get("http://localhost:8500/redis") │ │
│ │ │ │
│ │ // No awareness of: │ │
│ │ // - Redis cluster topology │ │
│ │ // - Connection pooling │ │
│ │ // - Retry logic │ │
│ │ // - Circuit breaking │ │
│ │ │ │
│ └───────────────────────────────────────────────────────┘ │
│ │ │
│ │ localhost:8500 │
│ ▼ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Ambassador Container │ │
│ │ │ │
│ │ ┌──────────────────────────────────────────────────┐ │ │
│ │ │ Connection Management │ │ │
│ │ │ - Connection pooling to upstream │ │ │
│ │ │ - Health checking │ │ │
│ │ │ - Failover between replicas │ │ │
│ │ └──────────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ ┌──────────────────────────────────────────────────┐ │ │
│ │ │ Protocol Translation │ │ │
│ │ │ - HTTP → Redis protocol │ │ │
│ │ │ - REST → gRPC │ │ │
│ │ │ - HTTP/1.1 → HTTP/2 │ │ │
│ │ └──────────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ ┌──────────────────────────────────────────────────┐ │ │
│ │ │ Resilience │ │ │
│ │ │ - Retries with exponential backoff │ │ │
│ │ │ - Circuit breaker │ │ │
│ │ │ - Timeout management │ │ │
│ │ └──────────────────────────────────────────────────┘ │ │
│ │ │ │
│ └───────────────────────────────────────────────────────┘ │
│ │ │
└────────────────────────────┼────────────────────────────────┘
│
▼ Redis Protocol (RESP)
┌─────────────────────────────────────────────────────────────┐
│ Redis Cluster │
│ [Master] ←→ [Replica] [Master] ←→ [Replica] │
│ │ │ │
│ └─────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Ambassador vs Sidecar vs Adapter
Pattern Comparison:
┌─────────────────────────────────────────────────────────────┐
│ AMBASSADOR │
│ Focus: Outbound connectivity to remote services │
│ │
│ ┌──────────┐ ┌──────────────┐ ┌──────────┐ │
│ │ App │ ────► │ Ambassador │ ────► │ External │ │
│ │ │ │ │ │ Service │ │
│ └──────────┘ └──────────────┘ └──────────┘ │
│ │
│ Use cases: │
│ - Legacy apps connecting to modern services │
│ - Protocol translation (REST→gRPC, HTTP→Redis) │
│ - Client-side resilience (retry, circuit breaker) │
│ - Connection pooling to external services │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ SIDECAR │
│ Focus: Augment both inbound and outbound traffic │
│ │
│ ┌──────────────┐ │
│ Inbound ──► │ Sidecar │ ──► App │
│ Outbound ◄── │ (Envoy) │ ◄── App │
│ └──────────────┘ │
│ │
│ Use cases: │
│ - Service mesh (mTLS, traffic management) │
│ - Observability (metrics, tracing, logging) │
│ - Security (authentication, authorization) │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ ADAPTER │
│ Focus: Transform/standardize inbound interfaces │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ External │ ────► │ Adapter │ ────► │ App │ │
│ │ Client │ │ │ │ │ │
│ └──────────┘ └──────────┘ └──────────┘ │
│ │
│ Use cases: │
│ - Expose different API formats (REST, GraphQL) │
│ - Metrics format conversion (StatsD→Prometheus) │
│ - Legacy interface compatibility │
└─────────────────────────────────────────────────────────────┘
Implementation: Redis Ambassador
// Redis Ambassador - Handles connection pooling, failover, and protocol translation
package main
import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"sync"
	"time"

	"github.com/go-redis/redis/v8"
	"github.com/sony/gobreaker"
)
type RedisAmbassador struct {
clients []*redis.Client
activeIndex int
mu sync.RWMutex
circuitBreaker *gobreaker.CircuitBreaker
metrics *AmbassadorMetrics
}
type AmbassadorMetrics struct {
	RequestsTotal      int64
	RequestsSuccessful int64
	RequestsFailed     int64
	FailoversTotal     int64
	CircuitBreakerOpen int64
	LatencyHistogram   *Histogram
}

// Histogram is a minimal stand-in so the example compiles; swap in a real
// metrics library (e.g. a Prometheus histogram) in production.
type Histogram struct {
	mu      sync.Mutex
	samples []float64
}

func (h *Histogram) Observe(v float64) {
	h.mu.Lock()
	h.samples = append(h.samples, v)
	h.mu.Unlock()
}
func NewRedisAmbassador(addresses []string) *RedisAmbassador {
clients := make([]*redis.Client, len(addresses))
for i, addr := range addresses {
clients[i] = redis.NewClient(&redis.Options{
Addr: addr,
DialTimeout: 5 * time.Second,
ReadTimeout: 3 * time.Second,
WriteTimeout: 3 * time.Second,
PoolSize: 10,
MinIdleConns: 2,
})
}
cb := gobreaker.NewCircuitBreaker(gobreaker.Settings{
Name: "redis-ambassador",
MaxRequests: 3, // Requests allowed in half-open state
Interval: 10 * time.Second, // Cyclic period for clearing counts
Timeout: 30 * time.Second, // Time to stay open before half-open
ReadyToTrip: func(counts gobreaker.Counts) bool {
failureRatio := float64(counts.TotalFailures) / float64(counts.Requests)
return counts.Requests >= 10 && failureRatio >= 0.6
},
OnStateChange: func(name string, from, to gobreaker.State) {
log.Printf("Circuit breaker %s: %s -> %s", name, from, to)
},
})
	return &RedisAmbassador{
		clients:        clients,
		activeIndex:    0,
		circuitBreaker: cb,
		metrics:        &AmbassadorMetrics{LatencyHistogram: &Histogram{}},
	}
}
// HTTP handler that translates HTTP requests to Redis commands
func (a *RedisAmbassador) ServeHTTP(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
start := time.Now()
	a.metrics.RequestsTotal++ // note: plain increments race under concurrency; use sync/atomic in production
// Parse the command from HTTP request
cmd, err := a.parseCommand(r)
if err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
// Execute through circuit breaker
result, err := a.circuitBreaker.Execute(func() (interface{}, error) {
return a.executeWithRetry(ctx, cmd, 3)
})
latency := time.Since(start)
a.metrics.LatencyHistogram.Observe(latency.Seconds())
if err != nil {
a.metrics.RequestsFailed++
		if err == gobreaker.ErrOpenState || err == gobreaker.ErrTooManyRequests {
			a.metrics.CircuitBreakerOpen++
http.Error(w, "service temporarily unavailable", http.StatusServiceUnavailable)
return
}
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
a.metrics.RequestsSuccessful++
// Return result as JSON
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(map[string]interface{}{
"result": result,
"latency_ms": latency.Milliseconds(),
})
}
type RedisCommand struct {
Command string `json:"command"`
Args []string `json:"args"`
}
func (a *RedisAmbassador) parseCommand(r *http.Request) (*RedisCommand, error) {
// Support both REST-style and JSON body
switch r.Method {
case "GET":
		// GET /redis?key={key} -> GET key
key := r.URL.Query().Get("key")
return &RedisCommand{Command: "GET", Args: []string{key}}, nil
case "PUT", "POST":
// POST /redis with JSON body
var cmd RedisCommand
if err := json.NewDecoder(r.Body).Decode(&cmd); err != nil {
return nil, fmt.Errorf("invalid request body: %w", err)
}
return &cmd, nil
case "DELETE":
key := r.URL.Query().Get("key")
return &RedisCommand{Command: "DEL", Args: []string{key}}, nil
default:
return nil, fmt.Errorf("unsupported method: %s", r.Method)
}
}
func (a *RedisAmbassador) executeWithRetry(ctx context.Context, cmd *RedisCommand, maxRetries int) (interface{}, error) {
var lastErr error
for attempt := 0; attempt < maxRetries; attempt++ {
if attempt > 0 {
// Exponential backoff
backoff := time.Duration(1<<uint(attempt-1)) * 100 * time.Millisecond
select {
case <-ctx.Done():
return nil, ctx.Err()
case <-time.After(backoff):
}
}
client := a.getActiveClient()
result, err := a.executeCommand(ctx, client, cmd)
if err == nil {
return result, nil
}
lastErr = err
// Check if error is retryable
if !a.isRetryable(err) {
return nil, err
}
// Attempt failover to another replica
if a.shouldFailover(err) {
a.failover()
}
}
return nil, fmt.Errorf("max retries exceeded: %w", lastErr)
}
func (a *RedisAmbassador) executeCommand(ctx context.Context, client *redis.Client, cmd *RedisCommand) (interface{}, error) {
args := make([]interface{}, len(cmd.Args)+1)
args[0] = cmd.Command
for i, arg := range cmd.Args {
args[i+1] = arg
}
result := client.Do(ctx, args...)
return result.Result()
}
func (a *RedisAmbassador) getActiveClient() *redis.Client {
a.mu.RLock()
defer a.mu.RUnlock()
return a.clients[a.activeIndex]
}
func (a *RedisAmbassador) failover() {
a.mu.Lock()
defer a.mu.Unlock()
previousIndex := a.activeIndex
a.activeIndex = (a.activeIndex + 1) % len(a.clients)
a.metrics.FailoversTotal++
log.Printf("Failover: %d -> %d", previousIndex, a.activeIndex)
}
func (a *RedisAmbassador) isRetryable(err error) bool {
	if err == redis.Nil {
		return false // key not found is a definitive answer, not a transient fault
	}
	// Treat everything else (network errors, timeouts) as retryable
	return true
}
func (a *RedisAmbassador) shouldFailover(err error) bool {
// Failover on connection errors
return err != nil && err != redis.Nil
}
// Health check endpoint for the ambassador itself
func (a *RedisAmbassador) HealthCheck(w http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
defer cancel()
client := a.getActiveClient()
if err := client.Ping(ctx).Err(); err != nil {
w.WriteHeader(http.StatusServiceUnavailable)
json.NewEncoder(w).Encode(map[string]interface{}{
"status": "unhealthy",
"error": err.Error(),
"active_client": a.activeIndex,
})
return
}
json.NewEncoder(w).Encode(map[string]interface{}{
"status": "healthy",
"active_client": a.activeIndex,
"circuit_breaker": a.circuitBreaker.State().String(),
})
}
func main() {
ambassador := NewRedisAmbassador([]string{
"redis-master:6379",
"redis-replica-1:6379",
"redis-replica-2:6379",
})
http.Handle("/redis", ambassador)
http.HandleFunc("/health", ambassador.HealthCheck)
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(ambassador.metrics)
	})
log.Println("Redis Ambassador listening on :8500")
log.Fatal(http.ListenAndServe(":8500", nil))
}
gRPC Ambassador for Legacy HTTP Applications
// Ambassador that translates HTTP/REST to gRPC
package main
import (
	"context"
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
	"time"

	grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
	grpc_retry "github.com/grpc-ecosystem/go-grpc-middleware/retry"
	grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"go.opentelemetry.io/contrib/propagators/b3"
	"go.opentelemetry.io/otel/propagation"
	"google.golang.org/genproto/googleapis/rpc/errdetails"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/keepalive"
	"google.golang.org/grpc/status"

	pb "myapp/payment/v1"
)

// AmbassadorMetrics is assumed to provide a RecordCall helper here;
// its definition is omitted (see the Redis ambassador for a fuller example).
type GRPCAmbassador struct {
	conn    *grpc.ClientConn
	client  pb.PaymentServiceClient
	metrics *AmbassadorMetrics
}

func NewGRPCAmbassador(target string, tlsConfig *tls.Config) (*GRPCAmbassador, error) {
// gRPC connection with production settings
conn, err := grpc.Dial(target,
grpc.WithTransportCredentials(credentials.NewTLS(tlsConfig)),
grpc.WithKeepaliveParams(keepalive.ClientParameters{
Time: 10 * time.Second,
Timeout: 3 * time.Second,
PermitWithoutStream: true,
}),
grpc.WithDefaultCallOptions(
grpc.MaxCallRecvMsgSize(4*1024*1024),
grpc.MaxCallSendMsgSize(4*1024*1024),
),
grpc.WithUnaryInterceptor(grpc_middleware.ChainUnaryClient(
grpc_retry.UnaryClientInterceptor(
grpc_retry.WithMax(3),
grpc_retry.WithBackoff(grpc_retry.BackoffExponential(100*time.Millisecond)),
grpc_retry.WithCodes(codes.Unavailable, codes.DeadlineExceeded),
),
grpc_prometheus.UnaryClientInterceptor,
otelgrpc.UnaryClientInterceptor(),
)),
)
if err != nil {
return nil, fmt.Errorf("failed to connect to gRPC server: %w", err)
}
	return &GRPCAmbassador{
conn: conn,
client: pb.NewPaymentServiceClient(conn),
metrics: &AmbassadorMetrics{},
}, nil
}
// HTTP to gRPC translation
func (a *GRPCAmbassador) ServeHTTP(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Extract trace context from HTTP headers and propagate to gRPC
ctx = a.extractTraceContext(ctx, r)
// Route based on HTTP path and method
switch {
case r.Method == "POST" && r.URL.Path == "/api/payments/charge":
a.handleChargePayment(ctx, w, r)
case r.Method == "GET" && strings.HasPrefix(r.URL.Path, "/api/payments/"):
a.handleGetPayment(ctx, w, r)
case r.Method == "POST" && r.URL.Path == "/api/payments/refund":
a.handleRefundPayment(ctx, w, r)
default:
http.Error(w, "not found", http.StatusNotFound)
}
}
// handleGetPayment and handleRefundPayment (not shown) follow the same
// parse -> translate -> call -> translate shape as this handler.
func (a *GRPCAmbassador) handleChargePayment(ctx context.Context, w http.ResponseWriter, r *http.Request) {
// Parse HTTP JSON request
var httpReq struct {
Amount int64 `json:"amount"`
Currency string `json:"currency"`
Source string `json:"source"`
Description string `json:"description"`
Metadata map[string]string `json:"metadata"`
}
if err := json.NewDecoder(r.Body).Decode(&httpReq); err != nil {
a.writeError(w, http.StatusBadRequest, "invalid request body")
return
}
// Translate to gRPC message
grpcReq := &pb.ChargePaymentRequest{
Amount: &pb.Money{
Amount: httpReq.Amount,
Currency: httpReq.Currency,
},
Source: httpReq.Source,
Description: httpReq.Description,
Metadata: httpReq.Metadata,
}
// Set timeout from HTTP header or default
timeout := 30 * time.Second
if timeoutHeader := r.Header.Get("X-Request-Timeout"); timeoutHeader != "" {
if d, err := time.ParseDuration(timeoutHeader); err == nil {
timeout = d
}
}
ctx, cancel := context.WithTimeout(ctx, timeout)
defer cancel()
// Make gRPC call
start := time.Now()
grpcResp, err := a.client.ChargePayment(ctx, grpcReq)
latency := time.Since(start)
a.metrics.RecordCall("ChargePayment", latency, err)
if err != nil {
a.handleGRPCError(w, err)
return
}
// Translate gRPC response to HTTP JSON
httpResp := map[string]interface{}{
"id": grpcResp.PaymentId,
"status": grpcResp.Status.String(),
"amount": grpcResp.Amount.Amount,
"currency": grpcResp.Amount.Currency,
"created_at": grpcResp.CreatedAt.AsTime().Format(time.RFC3339),
}
w.Header().Set("Content-Type", "application/json")
w.Header().Set("X-Response-Time", latency.String())
json.NewEncoder(w).Encode(httpResp)
}
func (a *GRPCAmbassador) handleGRPCError(w http.ResponseWriter, err error) {
// Translate gRPC status codes to HTTP status codes
code := status.Code(err)
httpStatus := grpcToHTTPStatus(code)
st, ok := status.FromError(err)
if !ok {
http.Error(w, "internal error", http.StatusInternalServerError)
return
}
response := map[string]interface{}{
"error": map[string]interface{}{
"code": st.Code().String(),
"message": st.Message(),
},
}
// Extract error details if present
for _, detail := range st.Details() {
switch d := detail.(type) {
case *errdetails.BadRequest:
var fieldErrors []map[string]string
for _, violation := range d.FieldViolations {
fieldErrors = append(fieldErrors, map[string]string{
"field": violation.Field,
"description": violation.Description,
})
}
response["error"].(map[string]interface{})["field_errors"] = fieldErrors
case *errdetails.RetryInfo:
w.Header().Set("Retry-After", d.RetryDelay.AsDuration().String())
}
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(httpStatus)
json.NewEncoder(w).Encode(response)
}
func grpcToHTTPStatus(code codes.Code) int {
switch code {
case codes.OK:
return http.StatusOK
case codes.InvalidArgument:
return http.StatusBadRequest
case codes.NotFound:
return http.StatusNotFound
case codes.AlreadyExists:
return http.StatusConflict
case codes.PermissionDenied:
return http.StatusForbidden
case codes.Unauthenticated:
return http.StatusUnauthorized
case codes.ResourceExhausted:
return http.StatusTooManyRequests
case codes.FailedPrecondition:
return http.StatusPreconditionFailed
case codes.Unavailable:
return http.StatusServiceUnavailable
case codes.DeadlineExceeded:
return http.StatusGatewayTimeout
default:
return http.StatusInternalServerError
}
}
func (a *GRPCAmbassador) extractTraceContext(ctx context.Context, r *http.Request) context.Context {
	// Extract W3C Trace Context headers (traceparent/tracestate)
	if r.Header.Get("traceparent") != "" {
		propagator := propagation.TraceContext{}
		ctx = propagator.Extract(ctx, propagation.HeaderCarrier(r.Header))
	}
	// Also support B3 headers for backward compatibility
	if r.Header.Get("X-B3-TraceId") != "" {
		b3Propagator := b3.New()
		ctx = b3Propagator.Extract(ctx, propagation.HeaderCarrier(r.Header))
	}
	return ctx
}
Authentication Ambassador
// Ambassador that handles OAuth2/OIDC token management
import (
	"context"
	"fmt"
	"io"
	"log"
	"net/http"
	"sync"
	"time"

	"github.com/google/uuid"
	"golang.org/x/oauth2"
	"golang.org/x/oauth2/clientcredentials"
)

type AuthAmbassador struct {
	tokenSource  oauth2.TokenSource
	httpClient   *http.Client
	targetURL    string
	mu           sync.RWMutex
	currentToken *oauth2.Token
}

func NewAuthAmbassador(config *clientcredentials.Config, targetURL string) *AuthAmbassador {
	// Client credentials (machine-to-machine) flow; the token source caches
	// tokens and refreshes them transparently
	tokenSource := config.TokenSource(context.Background())
// Create HTTP client with connection pooling
transport := &http.Transport{
MaxIdleConns: 100,
MaxIdleConnsPerHost: 10,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
}
return &AuthAmbassador{
tokenSource: tokenSource,
httpClient: &http.Client{Transport: transport},
targetURL: targetURL,
}
}
func (a *AuthAmbassador) ServeHTTP(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Get valid token (refresh if needed)
token, err := a.getValidToken(ctx)
if err != nil {
http.Error(w, "authentication failed", http.StatusInternalServerError)
log.Printf("Failed to get token: %v", err)
return
}
// Create proxied request
targetURL := a.targetURL + r.URL.Path
if r.URL.RawQuery != "" {
targetURL += "?" + r.URL.RawQuery
}
proxyReq, err := http.NewRequestWithContext(ctx, r.Method, targetURL, r.Body)
if err != nil {
http.Error(w, "failed to create request", http.StatusInternalServerError)
return
}
// Copy headers from original request
for key, values := range r.Header {
// Skip hop-by-hop headers
if isHopByHop(key) {
continue
}
for _, value := range values {
proxyReq.Header.Add(key, value)
}
}
// Add authentication
proxyReq.Header.Set("Authorization", "Bearer "+token.AccessToken)
// Add request ID for tracing
requestID := r.Header.Get("X-Request-ID")
if requestID == "" {
requestID = uuid.New().String()
}
proxyReq.Header.Set("X-Request-ID", requestID)
// Forward request with retry
resp, err := a.doWithRetry(proxyReq, 3)
if err != nil {
http.Error(w, "upstream request failed", http.StatusBadGateway)
return
}
defer resp.Body.Close()
// Copy response headers
for key, values := range resp.Header {
for _, value := range values {
w.Header().Add(key, value)
}
}
w.WriteHeader(resp.StatusCode)
io.Copy(w, resp.Body)
}
func (a *AuthAmbassador) getValidToken(ctx context.Context) (*oauth2.Token, error) {
a.mu.RLock()
token := a.currentToken
a.mu.RUnlock()
// Check if token is still valid (with buffer for expiry)
if token != nil && token.Expiry.After(time.Now().Add(30*time.Second)) {
return token, nil
}
// Need to refresh
a.mu.Lock()
defer a.mu.Unlock()
// Double-check after acquiring write lock
if a.currentToken != nil && a.currentToken.Expiry.After(time.Now().Add(30*time.Second)) {
return a.currentToken, nil
}
// Get new token
newToken, err := a.tokenSource.Token()
if err != nil {
return nil, fmt.Errorf("failed to refresh token: %w", err)
}
a.currentToken = newToken
log.Printf("Token refreshed, expires at: %v", newToken.Expiry)
return newToken, nil
}
func (a *AuthAmbassador) doWithRetry(req *http.Request, maxRetries int) (*http.Response, error) {
var lastErr error
for attempt := 0; attempt < maxRetries; attempt++ {
if attempt > 0 {
// Clone request body for retry
if req.GetBody != nil {
body, err := req.GetBody()
if err != nil {
return nil, fmt.Errorf("failed to get request body: %w", err)
}
req.Body = body
}
// Exponential backoff
time.Sleep(time.Duration(1<<uint(attempt-1)) * 100 * time.Millisecond)
}
resp, err := a.httpClient.Do(req)
if err != nil {
lastErr = err
continue
}
// Retry on 5xx errors
if resp.StatusCode >= 500 && resp.StatusCode < 600 {
resp.Body.Close()
lastErr = fmt.Errorf("server error: %d", resp.StatusCode)
continue
}
// Handle 401 by refreshing token
if resp.StatusCode == 401 && attempt < maxRetries-1 {
resp.Body.Close()
a.mu.Lock()
a.currentToken = nil // Force refresh
a.mu.Unlock()
token, err := a.getValidToken(req.Context())
if err != nil {
return nil, fmt.Errorf("token refresh failed: %w", err)
}
req.Header.Set("Authorization", "Bearer "+token.AccessToken)
continue
}
return resp, nil
}
return nil, fmt.Errorf("max retries exceeded: %w", lastErr)
}
var hopByHopHeaders = map[string]bool{
"Connection": true,
"Keep-Alive": true,
"Proxy-Authenticate": true,
"Proxy-Authorization": true,
"Te": true,
"Trailers": true,
"Transfer-Encoding": true,
"Upgrade": true,
}
func isHopByHop(header string) bool {
return hopByHopHeaders[http.CanonicalHeaderKey(header)]
}
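Because the filter canonicalizes header names before the map lookup, it is case-insensitive no matter how the client spells the header. A self-contained check of the same logic:

```go
package main

import (
	"fmt"
	"net/http"
)

// Hop-by-hop headers (RFC 2616 §13.5.1) must not be forwarded by a proxy;
// keys are stored in canonical form to match CanonicalHeaderKey output.
var hopByHopHeaders = map[string]bool{
	"Connection":          true,
	"Keep-Alive":          true,
	"Proxy-Authenticate":  true,
	"Proxy-Authorization": true,
	"Te":                  true,
	"Trailers":            true,
	"Transfer-Encoding":   true,
	"Upgrade":             true,
}

func isHopByHop(header string) bool {
	// CanonicalHeaderKey normalizes case: "transfer-encoding" -> "Transfer-Encoding"
	return hopByHopHeaders[http.CanonicalHeaderKey(header)]
}

func main() {
	fmt.Println(isHopByHop("transfer-encoding")) // true
	fmt.Println(isHopByHop("Authorization"))     // false: end-to-end, must be forwarded
}
```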
Kubernetes Deployment
# Ambassador deployment pattern
apiVersion: v1
kind: Pod
metadata:
name: legacy-app
labels:
app: legacy-app
spec:
containers:
# Legacy application
- name: legacy-app
image: legacy/mainframe-adapter:v1
ports:
- containerPort: 8080
env:
# Point to ambassador for external services
- name: REDIS_URL
value: "http://localhost:8500"
- name: PAYMENT_SERVICE_URL
value: "http://localhost:8501"
- name: AUTH_SERVICE_URL
value: "http://localhost:8502"
# Redis Ambassador
- name: redis-ambassador
image: myorg/redis-ambassador:v1.0
ports:
- containerPort: 8500
env:
- name: REDIS_ADDRESSES
value: "redis-master:6379,redis-replica-1:6379,redis-replica-2:6379"
- name: CIRCUIT_BREAKER_THRESHOLD
value: "0.5"
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "100m"
livenessProbe:
httpGet:
path: /health
port: 8500
initialDelaySeconds: 5
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: 8500
initialDelaySeconds: 2
periodSeconds: 5
# Payment gRPC Ambassador
- name: payment-ambassador
image: myorg/grpc-http-ambassador:v1.0
ports:
- containerPort: 8501
env:
- name: GRPC_TARGET
value: "payment-service.payment:50051"
- name: TLS_ENABLED
value: "true"
- name: TLS_CERT_PATH
value: "/etc/certs/tls.crt"
volumeMounts:
- name: certs
mountPath: /etc/certs
readOnly: true
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "100m"
# Auth Ambassador (handles OAuth2 token management)
- name: auth-ambassador
image: myorg/auth-ambassador:v1.0
ports:
- containerPort: 8502
env:
- name: OAUTH_CLIENT_ID
valueFrom:
secretKeyRef:
name: oauth-credentials
key: client-id
- name: OAUTH_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: oauth-credentials
key: client-secret
- name: OAUTH_TOKEN_URL
value: "https://auth.example.com/oauth/token"
- name: TARGET_SERVICE_URL
value: "https://api.partner.com"
resources:
requests:
memory: "32Mi"
cpu: "25m"
limits:
memory: "64Mi"
cpu: "50m"
volumes:
- name: certs
secret:
secretName: service-tls-certs
---
# Separate ambassador for shared services (deployed as DaemonSet or Deployment)
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: shared-ambassador
namespace: infrastructure
spec:
selector:
matchLabels:
app: shared-ambassador
template:
metadata:
labels:
app: shared-ambassador
spec:
containers:
- name: ambassador
image: myorg/multi-protocol-ambassador:v1.0
ports:
- containerPort: 8600
hostPort: 8600
env:
- name: SERVICES_CONFIG
value: |
- name: elasticsearch
protocol: http
targets:
- es-master-0:9200
- es-master-1:9200
- es-master-2:9200
loadBalancing: roundRobin
healthCheck:
path: /_cluster/health
interval: 10s
- name: kafka
protocol: tcp
targets:
- kafka-0:9092
- kafka-1:9092
- kafka-2:9092
loadBalancing: leastConnections
Ambassador Pattern Decision Matrix
When to Use Ambassador:
┌────────────────────────────────────────────────────────────────┐
│ Scenario │ Use Ambassador? │
├────────────────────────────────────┼───────────────────────────┤
│ Legacy app → Modern cloud service │ ✅ Yes │
│ HTTP app → gRPC service │ ✅ Yes │
│ Complex auth (OAuth2/mTLS) │ ✅ Yes │
│ Single external dependency │ ✅ Yes (simpler than lib)│
│ Polyglot microservices │ ⚠️ Consider sidecar │
│ Both inbound + outbound concerns │ ❌ Use sidecar │
│ Simple direct HTTP calls │ ❌ Overkill │
│ High-perf/low-latency critical │ ❌ Use library │
└────────────────────────────────────┴───────────────────────────┘
Resource Overhead:
┌────────────────────────────────────────────────────────────────┐
│ Ambassador Type │ Memory │ CPU │ Latency │
├────────────────────────────────────────────────────────────────┤
│ Redis Ambassador │ 30-50MB │ 10-50m │ +0.5ms │
│ gRPC-HTTP Ambassador │ 40-80MB │ 20-100m │ +1ms │
│ Auth Ambassador │ 20-40MB │ 10-30m │ +0.3ms │
│ Multi-Protocol │ 60-120MB │ 50-150m │ +1-2ms │
└────────────────────────────────────────────────────────────────┘
Ambassador vs Library Approach
Trade-off Analysis:
Library Approach (SDK in each service):
┌────────────────────────────────────────────────────────────────┐
│ Pros: │ Cons: │
├──────────────────────────────────┼─────────────────────────────┤
│ ✅ No additional process │ ❌ N languages = N SDKs │
│ ✅ Lower latency │ ❌ Updates require redeploy│
│ ✅ Direct debugging │ ❌ Inconsistent behavior │
│ ✅ No serialization overhead │ ❌ Business code pollution │
└──────────────────────────────────┴─────────────────────────────┘
Ambassador Approach:
┌────────────────────────────────────────────────────────────────┐
│ Pros: │ Cons: │
├──────────────────────────────────┼─────────────────────────────┤
│ ✅ Language agnostic │ ❌ Additional latency │
│ ✅ Update without app redeploy │ ❌ Resource overhead │
│ ✅ Consistent behavior │ ❌ Another component to run│
│ ✅ Clean separation of concerns │ ❌ Debugging complexity │
│ ✅ Legacy app compatibility │ ❌ Additional serialization│
└──────────────────────────────────┴─────────────────────────────┘
Decision Framework:
┌────────────────────────────────────────────────────────────────┐
│ │
│ Single Language, High Performance → Library │
│ │ │
│ ▼ │
│ Multiple Languages OR Legacy Apps → Ambassador │
│ │ │
│ ▼ │
│ Full Service Mesh Features Needed → Sidecar │
│ │
└────────────────────────────────────────────────────────────────┘
The ambassador pattern provides a clean separation between application logic and connectivity concerns. It shines in scenarios involving legacy application modernization, protocol translation, and complex authentication flows. While it introduces latency and resource overhead compared to library-based approaches, the operational benefits—language independence, independent updates, and consistent behavior across services—often justify the trade-off in heterogeneous environments.