Design Patterns in Python: Factory, Strategy, Observer, and Singleton

Here's the thing about design patterns: they're solutions to problems you might not have yet. They're useful, absolutely. But in Python? We've got some shortcuts that would make the Gang of Four jealous.
You've probably heard the names: Factory, Strategy, Observer, Singleton. They're classics for a reason. They solve real problems: creating objects flexibly, swapping algorithms at runtime, loosely coupling publishers and subscribers, and ensuring one instance exists globally. But Python bends the rules. Sometimes the "proper" design pattern is overkill. Sometimes a module-level variable or a function does the job cleaner.
Design patterns originated in object-oriented languages like Java and C++ where verbosity is unavoidable, where you can't pass functions around, where modules don't carry state, where every abstraction needs a class. Python was built differently. It treats functions as first-class objects, makes modules inherently singleton-like, and gives you decorators as a language feature rather than a pattern you implement yourself. That changes everything. When you come to Python from a Java or C++ background, you can fall into the trap of bringing all that ceremony with you. This guide is your antidote to that habit.
In this guide, we'll walk through these four patterns (plus Adapter and Decorator), show you the traditional way, then the Pythonic way. By the end, you'll know when to use each, and more importantly, when to skip it. You'll also understand the reasoning behind each decision, because knowing the "what" without the "why" just turns you into a pattern-matching robot instead of a thoughtful engineer.
Let's go.
Table of Contents
- Why Patterns Matter in Python
- The Factory Pattern: Creating Objects Flexibly
- The Strategy Pattern: Swappable Algorithms
- The Observer Pattern: Event-Driven Architecture
- The Singleton Pattern: One Instance to Rule Them All
- The Adapter Pattern: Making Incompatible Things Work Together
- The Decorator Pattern: Adding Behavior Dynamically
- Real-World Example: Building a Plugin System
- Why Patterns Matter in Python (Revisited: Real Cost-Benefit Thinking)
- Pythonic Alternatives: A Summary
- Common Pattern Mistakes
- Common Pitfalls and How to Avoid Them
- Choosing Between Patterns and Simpler Solutions
- Anti-Patterns: When Design Patterns Go Wrong
- Making the Right Choice
- The Bottom Line
Why Patterns Matter in Python
Before we dive in, let's establish something important: design patterns are not about following rules. They're about communicating intent. When you see a Factory in a codebase, you immediately understand that object creation is centralized and flexible. When you spot an Observer, you know you're looking at an event-driven architecture. Patterns are vocabulary, shared language between developers that cuts through the noise and makes code reviews faster and onboarding smoother.
That said, Python's expressiveness means you often achieve the same communicative clarity with far less code. A dictionary mapping strings to classes communicates "this is a registry" just as clearly as a formal Factory class, sometimes more clearly, because there's less indirection to trace. The Pythonic approach isn't laziness; it's embracing what the language already gives you.
There's also a deeper reason to understand patterns even if you end up not using them verbatim: they teach you to recognize recurring structural problems. Once you've seen the Factory problem, where your code is littered with if animal_type == "dog": return Dog() checks, you'll spot it immediately in unfamiliar codebases. You'll know there's a better way. The pattern is the map; the Pythonic shortcut is the faster route to the same destination. You need to know both.
Finally, patterns matter in Python's AI/ML ecosystem specifically. Frameworks like PyTorch, scikit-learn, and FastAPI lean heavily on Strategy (interchangeable optimizers, transforms, models), Observer (callbacks, hooks, event loops), and Factory (model registries, dataset loaders). Reading those frameworks becomes dramatically easier when you recognize the patterns underneath the surface.
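To make that framework claim concrete, here's a minimal, self-contained sketch (toy functions, not any real framework's API) of Strategy in a training loop: the loop stays fixed while the update rule is injected as a plain callable.

```python
def sgd_step(weight, grad, lr=0.1):
    # Plain gradient descent update.
    return weight - lr * grad

def sign_step(weight, grad, lr=0.1):
    # signSGD-style update: only the gradient's sign matters.
    sign = (grad > 0) - (grad < 0)
    return weight - lr * sign

def train(step_fn, weight, grads):
    # The loop never changes; only the injected strategy does.
    for g in grads:
        weight = step_fn(weight, g)
    return weight

w = train(sgd_step, 1.0, [0.5, 0.5])
```

Swapping optimizers is just passing a different function; real frameworks dress this up with classes and state, but the core idea is the same.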
The Factory Pattern: Creating Objects Flexibly
The Problem: You have different object types, and deciding which one to create depends on runtime conditions. Hard-coding if/else chains everywhere is messy. You want a single place that handles object creation.
The moment you find yourself writing the same if/elif/else block in three different files just to decide which class to instantiate, you've found your Factory problem. The core issue is that object creation logic is scattered, meaning every new subclass forces you to hunt down every place that logic lives and add another branch. That's the definition of code that doesn't scale, and it's exactly what the Factory pattern was invented to address.
The Traditional Approach:
```python
from abc import ABC, abstractmethod

# Abstract base class
class Animal(ABC):
    @abstractmethod
    def speak(self):
        pass

class Dog(Animal):
    def speak(self):
        return "Woof!"

class Cat(Animal):
    def speak(self):
        return "Meow!"

class Bird(Animal):
    def speak(self):
        return "Tweet!"

# Factory class
class AnimalFactory:
    @staticmethod
    def create_animal(animal_type):
        if animal_type == "dog":
            return Dog()
        elif animal_type == "cat":
            return Cat()
        elif animal_type == "bird":
            return Bird()
        else:
            raise ValueError(f"Unknown animal: {animal_type}")

# Usage
factory = AnimalFactory()
dog = factory.create_animal("dog")
print(dog.speak())  # Woof!
```

This works, but it's boilerplate-heavy. Every time you add a new animal type, you modify the factory. That's not scaling well. More critically, modifying the factory class every time you add a new type violates the Open/Closed Principle: your code should be open for extension but closed for modification. The factory as written is a bottleneck: every new animal type requires a change in the same place, increasing the risk of introducing bugs and making the factory a source of merge conflicts on teams.
The Pythonic Alternative:
```python
from abc import ABC, abstractmethod

class Animal(ABC):
    @abstractmethod
    def speak(self):
        pass

class Dog(Animal):
    def speak(self):
        return "Woof!"

class Cat(Animal):
    def speak(self):
        return "Meow!"

class Bird(Animal):
    def speak(self):
        return "Tweet!"

# Registry dictionary
ANIMALS = {
    "dog": Dog,
    "cat": Cat,
    "bird": Bird,
}

def create_animal(animal_type):
    animal_class = ANIMALS.get(animal_type)
    if not animal_class:
        raise ValueError(f"Unknown animal: {animal_type}")
    return animal_class()

# Usage
dog = create_animal("dog")
print(dog.speak())  # Woof!
```

See the difference? No factory class. Just a dictionary mapping types to classes and a simple function. Want to add a new animal? Just add it to ANIMALS. The function stays untouched. This is extensibility without the class overhead. The dictionary is the registry, the function is the factory, and the whole thing fits in your head at once. Notice also that Python classes are first-class objects: you can store them in dictionaries, pass them to functions, and call them just like any other callable. That's the feature that makes this pattern so natural here.
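One extension you'll often want, sketched here with a hypothetical name parameter: forward *args and **kwargs through the factory so registered classes can take constructor arguments.

```python
ANIMALS = {}

class Dog:
    def __init__(self, name):
        self.name = name

    def speak(self):
        return f"{self.name} says Woof!"

ANIMALS["dog"] = Dog

def create_animal(animal_type, *args, **kwargs):
    try:
        animal_class = ANIMALS[animal_type]
    except KeyError:
        raise ValueError(f"Unknown animal: {animal_type}") from None
    # Forward everything to the chosen class's constructor.
    return animal_class(*args, **kwargs)

rex = create_animal("dog", name="Rex")
```

Because the factory just calls whatever callable it finds, it doesn't care what signature each class has; the caller supplies the right arguments.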
Even More Pythonic: Using a Decorator
If you want auto-registration (so you don't have to manually update the dictionary), use a decorator:
```python
from abc import ABC, abstractmethod

ANIMALS = {}

def register(name):
    def wrapper(cls):
        ANIMALS[name] = cls
        return cls
    return wrapper

class Animal(ABC):
    @abstractmethod
    def speak(self):
        pass

@register("dog")
class Dog(Animal):
    def speak(self):
        return "Woof!"

@register("cat")
class Cat(Animal):
    def speak(self):
        return "Meow!"

@register("bird")
class Bird(Animal):
    def speak(self):
        return "Tweet!"

def create_animal(animal_type):
    animal_class = ANIMALS.get(animal_type)
    if not animal_class:
        raise ValueError(f"Unknown animal: {animal_type}")
    return animal_class()

# Usage
dog = create_animal("dog")
print(dog.speak())  # Woof!
```

Now adding a new animal type is just one line: the decorator. The factory function never changes. This scales beautifully. The registration happens at class definition time, so you can't accidentally forget to register a class; the definition and the registration live right next to each other. Variants of this registry-plus-decorator idiom show up throughout Python's ML ecosystem, in model, dataset, and component registries. When you see one in those frameworks, you'll recognize it immediately.
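A variation worth knowing, shown here as a sketch rather than anything from the text above: since Python 3.6, subclasses can register themselves via __init_subclass__, with no decorator at all.

```python
ANIMALS = {}

class Animal:
    # Called once per subclass, at class definition time.
    def __init_subclass__(cls, name=None, **kwargs):
        super().__init_subclass__(**kwargs)
        ANIMALS[name or cls.__name__.lower()] = cls

class Dog(Animal, name="dog"):
    def speak(self):
        return "Woof!"

class Cat(Animal):  # no name given: registers under "cat"
    def speak(self):
        return "Meow!"

def create_animal(animal_type):
    try:
        return ANIMALS[animal_type]()
    except KeyError:
        raise ValueError(f"Unknown animal: {animal_type}") from None
```

Here inheriting from Animal is the registration, so it's impossible to subclass without ending up in the registry.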
When to Use Factory:
- You have multiple related classes and the client shouldn't care which one to instantiate.
- The specific class depends on configuration, user input, or runtime state.
- You want to centralize object creation logic.
When It's Overkill:
- You only have one or two subclasses.
- The creation logic is trivial.
- Direct instantiation (Dog()) is clear enough.
The Strategy Pattern: Swappable Algorithms
The Problem: You have an operation that can be performed in multiple ways. You want to pick the algorithm at runtime without littering your code with conditionals. Each algorithm should be independent and easily swappable.
Think about a payment system that needs to handle credit cards, PayPal, and cryptocurrency. Or a data pipeline that can compress output with gzip, zstd, or lz4. Or a machine learning training loop that can use SGD, Adam, or RMSProp. In each case, the surrounding infrastructure is identical, only the core algorithm differs. The Strategy pattern says: extract that algorithm into its own object (or function) and inject it. Your main code stops caring about the details of how the operation is performed and focuses only on when and with what data.
The Traditional Approach:
```python
from abc import ABC, abstractmethod

# Strategy interface
class PaymentStrategy(ABC):
    @abstractmethod
    def pay(self, amount):
        pass

class CreditCardPayment(PaymentStrategy):
    def __init__(self, card_number):
        self.card_number = card_number

    def pay(self, amount):
        return f"Paid ${amount} with credit card {self.card_number[-4:]}"

class PayPalPayment(PaymentStrategy):
    def __init__(self, email):
        self.email = email

    def pay(self, amount):
        return f"Paid ${amount} via PayPal ({self.email})"

class CryptoCurrencyPayment(PaymentStrategy):
    def __init__(self, wallet_address):
        self.wallet_address = wallet_address

    def pay(self, amount):
        return f"Paid ${amount} in crypto to {self.wallet_address}"

# Context class
class Checkout:
    def __init__(self, strategy):
        self.strategy = strategy

    def process_payment(self, amount):
        return self.strategy.pay(amount)

# Usage
checkout = Checkout(CreditCardPayment("1234-5678-9012-3456"))
print(checkout.process_payment(99.99))  # Paid $99.99 with credit card 3456

checkout.strategy = PayPalPayment("user@example.com")
print(checkout.process_payment(49.99))  # Paid $49.99 via PayPal (user@example.com)
```

This is clean and extensible. Each payment method is its own class. You swap them at runtime. But it's also a lot of ceremony. You've written three concrete classes, one abstract base class, and one context class to do something that's essentially "call a different function depending on user choice." In Java, this is unavoidable. In Python, you have options.
The Pythonic Alternative:
# Strategy as a dictionary of functions
def pay_with_credit_card(amount, card_number):
return f"Paid ${amount} with credit card {card_number[-4:]}"
def pay_with_paypal(amount, email):
return f"Paid ${amount} via PayPal ({email})"
def pay_with_crypto(amount, wallet_address):
return f"Paid ${amount} in crypto to {wallet_address}"
# Strategies are just functions
PAYMENT_STRATEGIES = {
"credit_card": pay_with_credit_card,
"paypal": pay_with_paypal,
"crypto": pay_with_crypto,
}
class Checkout:
def __init__(self, strategy_name, strategy_params):
self.strategy = PAYMENT_STRATEGIES[strategy_name]
self.params = strategy_params
def process_payment(self, amount):
return self.strategy(amount, **self.params)
# Usage
checkout = Checkout("credit_card", {"card_number": "1234-5678-9012-3456"})
print(checkout.process_payment(99.99)) # Paid $99.99 with credit card 3456
checkout = Checkout("paypal", {"email": "user@example.com"})
print(checkout.process_payment(49.99)) # Paid $49.99 via PayPal (user@example.com)No abstract base classes. No strategy interface. Just functions. PAYMENT_STRATEGIES is a dictionary mapping names to callables. This is simpler, more Pythonic, and it does the exact same thing. The key insight is that in Python, a function already encapsulates a reusable behavior, you don't need a class to wrap it just to make it swappable. You can store functions in variables, put them in dictionaries, and pass them around freely.
Or Use First-Class Functions Directly:
```python
def checkout(payment_fn, amount, **params):
    return payment_fn(amount, **params)

# Usage
result = checkout(pay_with_credit_card, 99.99, card_number="1234-5678-9012-3456")
print(result)  # Paid $99.99 with credit card 3456
```

No class wrapper needed. Just pass the function and call it. Python treats functions as first-class citizens. Use that. This is the most distilled version of the Strategy pattern: you're literally just passing the algorithm as an argument. If your mental model of "design patterns" requires a class, you're missing one of Python's greatest strengths.
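If you'd rather fix the payment details up front and pass around a ready-to-call strategy, functools.partial does the binding. A small sketch reusing the credit card function:

```python
from functools import partial

def pay_with_credit_card(amount, card_number):
    return f"Paid ${amount} with credit card {card_number[-4:]}"

# Bind the card once; the result is a plain callable taking just an amount.
charge_my_card = partial(pay_with_credit_card, card_number="1234-5678-9012-3456")

result = charge_my_card(99.99)
```

The partial object carries its bound arguments with it, so downstream code never needs to know which strategy it's holding or what configuration that strategy required.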
When to Use Strategy:
- You have multiple algorithms that solve the same problem.
- You want to pick the algorithm at runtime.
- Each algorithm is complex enough to deserve its own class.
- You need to add new algorithms frequently without changing existing code.
When It's Overkill:
- The algorithms are simple (a few lines each).
- You rarely switch between them.
- A simple if/elif chain is clear enough.
The Observer Pattern: Event-Driven Architecture
The Problem: You have a publisher that generates events. Multiple subscribers need to react to those events, but the publisher shouldn't know about them. When something happens, notify everyone who's listening.
This pattern shows up everywhere once you know to look for it. A user updates their profile, we need to send a confirmation email, update the search index, and log the action. A sensor detects an anomaly, three different monitoring systems need to fire alerts. A training epoch ends in your ML pipeline, the logger, the checkpoint saver, and the learning rate scheduler all need to respond. The publisher doesn't want to be responsible for managing all those downstream reactions. It just wants to say "something happened" and let the listeners deal with the rest. That's loose coupling, and it's the heart of the Observer pattern.
The Traditional Approach:
```python
from abc import ABC, abstractmethod

# Observer interface
class Observer(ABC):
    @abstractmethod
    def update(self, event):
        pass

# Publisher
class EventPublisher:
    def __init__(self):
        self._observers = []

    def subscribe(self, observer):
        if observer not in self._observers:
            self._observers.append(observer)

    def unsubscribe(self, observer):
        self._observers.remove(observer)

    def emit(self, event):
        for observer in self._observers:
            observer.update(event)

# Concrete observers
class EmailNotifier(Observer):
    def update(self, event):
        print(f"Email sent: {event}")

class SlackNotifier(Observer):
    def update(self, event):
        print(f"Slack message: {event}")

class LoggerObserver(Observer):
    def update(self, event):
        print(f"Logged: {event}")

# Usage
publisher = EventPublisher()
email = EmailNotifier()
slack = SlackNotifier()
logger = LoggerObserver()

publisher.subscribe(email)
publisher.subscribe(slack)
publisher.subscribe(logger)

publisher.emit("Server is down!")
# Email sent: Server is down!
# Slack message: Server is down!
# Logged: Server is down!
```

This works. It decouples the publisher from the subscribers. But it's verbose. You're defining an abstract Observer class just to enforce an update method contract, something Python's duck typing handles for free. Every new subscriber needs its own class, even if it's just wrapping a print statement.
The Pythonic Alternative:
```python
class EventPublisher:
    def __init__(self):
        self._listeners = {}

    def subscribe(self, event_name, callback):
        if event_name not in self._listeners:
            self._listeners[event_name] = []
        self._listeners[event_name].append(callback)

    def emit(self, event_name, **kwargs):
        if event_name in self._listeners:
            for callback in self._listeners[event_name]:
                callback(**kwargs)

# Usage
publisher = EventPublisher()

def send_email(message):
    print(f"Email sent: {message}")

def post_to_slack(message):
    print(f"Slack message: {message}")

def log_event(message):
    print(f"Logged: {message}")

publisher.subscribe("alert", send_email)
publisher.subscribe("alert", post_to_slack)
publisher.subscribe("alert", log_event)

publisher.emit("alert", message="Server is down!")
# Email sent: Server is down!
# Slack message: Server is down!
# Logged: Server is down!
```

No observer class. No interface. Just functions as listeners. The EventPublisher stores callbacks in a dictionary and calls them when events fire. Simpler, cleaner, more Pythonic. Notice the event-name-based routing too: now you can have different callbacks for different event types on the same publisher, which is far more powerful than a single update method that has to inspect the event itself to decide what to do.
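Two refinements worth making in real code, sketched below: collections.defaultdict removes the membership checks, and an unsubscribe method lets listeners detach.

```python
from collections import defaultdict

class EventPublisher:
    def __init__(self):
        self._listeners = defaultdict(list)

    def subscribe(self, event_name, callback):
        self._listeners[event_name].append(callback)

    def unsubscribe(self, event_name, callback):
        self._listeners[event_name].remove(callback)

    def emit(self, event_name, **kwargs):
        # Iterate over a copy so a callback may unsubscribe itself mid-emit.
        for callback in list(self._listeners[event_name]):
            callback(**kwargs)

log = []
publisher = EventPublisher()

def send_email(message):
    log.append(f"Email sent: {message}")

publisher.subscribe("alert", send_email)
publisher.emit("alert", message="Server is down!")
publisher.unsubscribe("alert", send_email)
publisher.emit("alert", message="Still down!")  # nobody listening now
```

The copy inside emit matters more than it looks: mutating a list while iterating over it silently skips elements, and event buses are exactly where listeners tend to remove themselves.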
Using a Signal Library:
The Observer pattern is common enough that mature third-party libraries exist for it. blinker is an excellent one:
```python
from blinker import signal

# Define a signal
server_alert = signal("server-alert")

# Define listeners
@server_alert.connect
def send_email(sender, message=None):
    print(f"Email sent: {message}")

@server_alert.connect
def post_to_slack(sender, message=None):
    print(f"Slack message: {message}")

# Emit the signal
server_alert.send("system", message="Server is down!")
# Email sent: Server is down!
# Slack message: Server is down!
```

blinker is designed exactly for this. It's clean, Pythonic, and used by Flask and other frameworks (Django ships its own, similar signals system). If you need Observer, use it. The decorator-based subscription (@server_alert.connect) means your handler and its registration live right next to each other, making the codebase dramatically easier to audit when you're debugging why a particular action is or isn't firing.
When to Use Observer:
- You have a publisher that generates events.
- Multiple subscribers need to react independently.
- Subscribers should be loosely coupled from the publisher.
- The number of subscribers is dynamic (they come and go at runtime).
When It's Overkill:
- You have only one or two listeners.
- The relationship between publisher and subscriber is static.
- A simple callback or handler is enough.
The Singleton Pattern: One Instance to Rule Them All
The Problem: You want to ensure only one instance of a class exists globally. Database connections, configuration managers, loggers, these should only exist once.
The Singleton pattern addresses a specific, practical problem: resource management. A database connection pool shouldn't spawn a new pool every time a class is instantiated. A configuration object shouldn't reload from disk on every access. A logger shouldn't have a dozen independent instances all writing to different file handles. These resources are expensive to create, need to share state across the application, and often represent a finite resource like a network connection or file lock. The Singleton guarantees you don't accidentally create duplicates.
The Traditional Approach:
```python
class DatabaseConnection:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._initialized = False
        return cls._instance

    def __init__(self):
        if self._initialized:
            return
        self.connection = None
        self._initialized = True

    def connect(self, host, port):
        self.connection = f"Connected to {host}:{port}"
        return self.connection

# Usage
db1 = DatabaseConnection()
db1.connect("localhost", 5432)
print(db1.connection)  # Connected to localhost:5432

db2 = DatabaseConnection()
print(db2 is db1)  # True
print(db2.connection)  # Connected to localhost:5432
```

This uses __new__ to intercept object creation and return the same instance every time. It works, but it's convoluted. You have to manage _instance and _initialized flags. Ugh. The _initialized guard is especially easy to mess up: if you forget it, __init__ runs every time you call DatabaseConnection(), which resets your state even though the instance is the same. That's the kind of subtle bug that takes hours to find.
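One more caveat the traditional version glosses over: the __new__ check isn't thread-safe. If two threads hit the None test at the same time, you can end up with two instances. A sketch of the usual double-checked locking guard:

```python
import threading

class ThreadSafeSingleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # Double-checked locking: cheap fast path once created,
        # lock held only around the first construction.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance

a = ThreadSafeSingleton()
b = ThreadSafeSingleton()
```

The second None check inside the lock is not redundant: another thread may have created the instance while this one was waiting to acquire the lock.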
The Pythonic Alternative #1: Module-Level Singleton
```python
class _DatabaseConnection:
    def __init__(self):
        self.connection = None

    def connect(self, host, port):
        self.connection = f"Connected to {host}:{port}"
        return self.connection

# The singleton instance
db = _DatabaseConnection()

# Usage
db.connect("localhost", 5432)
print(db.connection)  # Connected to localhost:5432
```

Just create the instance once at module load time. Python modules are singletons: they're loaded once and cached in sys.modules. Export the instance, not the class. Done. The underscore prefix on _DatabaseConnection signals to other developers that they shouldn't instantiate this class directly; they should use the db instance that the module provides. This is a convention, not enforcement, but conventions are often enough.
```python
# client.py
from database import db

db.connect("localhost", 5432)
print(db.connection)  # Connected to localhost:5432
```

This is the most Pythonic approach. No magic. No metaclasses. Just module-level state. It also has a nice side effect: it's trivially testable. In tests, you can just replace db with a mock object in the module's namespace, which is far easier than working around a __new__-based Singleton that actively resists multiple instances.
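A closely related idiom, sketched here as a variation rather than anything from the text above: expose a cached accessor function instead of a module-level instance, so creation is lazy and tests can reset it with get_db.cache_clear().

```python
from functools import lru_cache

class _DatabaseConnection:
    def __init__(self):
        self.connection = None

    def connect(self, host, port):
        self.connection = f"Connected to {host}:{port}"
        return self.connection

@lru_cache(maxsize=None)
def get_db():
    # First call constructs the instance; every later call
    # returns the same cached object.
    return _DatabaseConnection()

db1 = get_db()
db2 = get_db()
```

Laziness matters when construction is expensive (opening a real connection): nothing happens at import time, only on first use.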
The Pythonic Alternative #2: Decorator-Based Singleton
If you like the class interface but want to ensure only one instance, use a decorator:
```python
def singleton(cls):
    instances = {}
    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return get_instance

@singleton
class DatabaseConnection:
    def __init__(self):
        self.connection = None

    def connect(self, host, port):
        self.connection = f"Connected to {host}:{port}"
        return self.connection

# Usage
db1 = DatabaseConnection()
db1.connect("localhost", 5432)
print(db1.connection)  # Connected to localhost:5432

db2 = DatabaseConnection()
print(db2 is db1)  # True
```

The decorator wraps the class and returns a function that manages instances. Every time you call DatabaseConnection(), you get the same instance. Clean, readable, no __new__ magic. The instances dictionary is captured in the closure, so it persists for the lifetime of the program. This approach also composes: you can apply the @singleton decorator to as many classes as you want without any changes to the decorator itself. One trade-off to note: after decoration, the name DatabaseConnection is bound to a function, not a class, so isinstance checks against it no longer work.
The Pythonic Alternative #3: Using a Metaclass
If you want to enforce singleton behavior across multiple classes:
```python
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in SingletonMeta._instances:
            SingletonMeta._instances[cls] = super().__call__(*args, **kwargs)
        return SingletonMeta._instances[cls]

class DatabaseConnection(metaclass=SingletonMeta):
    def __init__(self):
        self.connection = None

    def connect(self, host, port):
        self.connection = f"Connected to {host}:{port}"
        return self.connection

# Usage
db1 = DatabaseConnection()
db1.connect("localhost", 5432)
db2 = DatabaseConnection()
print(db2 is db1)  # True
```

The metaclass's __call__ intercepts instantiation. Every time the class is called, the metaclass checks if an instance exists. If not, it creates one. If yes, it returns the existing one. Elegant. The metaclass approach is the most "proper" OOP version of this pattern in Python, and it's worth knowing because you'll encounter it in mature libraries. However, it's also the most opaque: readers unfamiliar with metaclasses will be confused, so reserve it for cases where you genuinely need singleton enforcement across a large class hierarchy.
When to Use Singleton:
- You have a resource that should exist only once (database connection, logger, config).
- Multiple parts of your code need to access this resource.
- You want to centralize access.
When It's Overkill:
- The module-level instance pattern works fine and is simpler.
- You don't actually need a class (a module with module-level state is enough).
- The "singleton" is easy to instantiate and stateless anyway.
The Adapter Pattern: Making Incompatible Things Work Together
The Problem: You have an existing interface, and you need to use a third-party library with a different interface. Rather than changing your code or the library, you adapt one to the other.
The Traditional Approach:
```python
# Legacy interface (what your code expects)
class LegacyDataProcessor:
    def process(self, data):
        print(f"Processing with legacy system: {data}")

# Third-party library (incompatible interface)
class ModernDataService:
    def transform(self, input_data):
        return f"Transformed: {input_data}"

# Adapter
class DataServiceAdapter:
    def __init__(self, service):
        self.service = service

    def process(self, data):
        # Convert the interface
        return self.service.transform(data)

# Usage
legacy = LegacyDataProcessor()
modern = ModernDataService()
adapter = DataServiceAdapter(modern)

# Now you can use ModernDataService with the legacy interface
print(adapter.process("raw data"))  # Transformed: raw data
```

The adapter wraps the incompatible object and translates method calls. The beauty here is that the rest of your codebase only ever sees the process interface; it has no idea there's a ModernDataService underneath. When you eventually switch to a third library with a convert method, you just write a new adapter and nothing else changes.
The Pythonic Alternative:
```python
class ModernDataService:
    def transform(self, input_data):
        return f"Transformed: {input_data}"

# Just wrap the method under the expected name
def adapt_modern_service(service):
    def process(data):
        return service.transform(data)
    return {"process": process}

service = ModernDataService()
adapted = adapt_modern_service(service)
print(adapted["process"]("raw data"))  # Transformed: raw data
```

Or use a wrapper function:
```python
def modern_to_legacy(modern_service):
    """Convert ModernDataService to the expected legacy interface."""
    return type("Adapted", (), {
        "process": lambda self, data: modern_service.transform(data)
    })()

adapted = modern_to_legacy(service)
print(adapted.process("raw data"))  # Transformed: raw data
```

In practice, if you control the code, you'd just call the right method:
```python
service = ModernDataService()
result = service.transform("raw data")  # Just call transform directly
```

Don't add indirection if you don't need it. The Adapter is one of the patterns most commonly over-applied. Before reaching for it, ask yourself: can I just rename a method, update the caller, or use a thin wrapper function? The full Adapter class shines when you have a stable interface that dozens of callers depend on and you need to plug in a new backend without touching any of those callers.
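Duck typing is also why Python often doesn't need a formal adapter class at all; if you still want the contract checked, typing.Protocol gives you a structural interface without inheritance. A sketch along the same lines as the example above:

```python
from typing import Protocol

class Processor(Protocol):
    def process(self, data: str) -> str: ...

class ModernDataService:
    def transform(self, input_data):
        return f"Transformed: {input_data}"

class DataServiceAdapter:
    """Thin adapter; satisfies Processor structurally, no base class needed."""
    def __init__(self, service):
        self.service = service

    def process(self, data: str) -> str:
        return self.service.transform(data)

def run(processor: Processor, data: str) -> str:
    # A static type checker accepts any object with a matching .process().
    return processor.process(data)

out = run(DataServiceAdapter(ModernDataService()), "raw data")
```

The Protocol costs nothing at runtime; its payoff is that mypy or pyright will flag any "adapter" that doesn't actually match the expected interface.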
When to Use Adapter:
- You're integrating a third-party library with an incompatible interface.
- You can't modify the library or your existing code.
- You want to isolate the incompatibility in one place.
When It's Overkill:
- You control both sides of the interface (just change one).
- The adaptation is trivial (you'd add more confusion than clarity).
- A simple wrapper function is clearer than a full adapter class.
The Decorator Pattern: Adding Behavior Dynamically
The Problem: You want to add new functionality to an object without modifying its class. Decorators let you wrap an object and add behavior around it.
The Traditional Approach:
```python
from abc import ABC, abstractmethod

# Component interface
class Coffee(ABC):
    @abstractmethod
    def cost(self):
        pass

    @abstractmethod
    def description(self):
        pass

# Concrete component
class SimpleCoffee(Coffee):
    def cost(self):
        return 2.00

    def description(self):
        return "Simple coffee"

# Decorator
class CoffeeDecorator(Coffee):
    def __init__(self, coffee):
        self.coffee = coffee

    @abstractmethod
    def cost(self):
        pass

    @abstractmethod
    def description(self):
        pass

# Concrete decorators
class MilkDecorator(CoffeeDecorator):
    def cost(self):
        return self.coffee.cost() + 0.50

    def description(self):
        return f"{self.coffee.description()}, milk"

class SugarDecorator(CoffeeDecorator):
    def cost(self):
        return self.coffee.cost() + 0.25

    def description(self):
        return f"{self.coffee.description()}, sugar"

# Usage
coffee = SimpleCoffee()
coffee = MilkDecorator(coffee)
coffee = SugarDecorator(coffee)
print(coffee.description())  # Simple coffee, milk, sugar
print(coffee.cost())  # 2.75
```

This works beautifully. You start with a base object and wrap it with decorators, adding behavior at each layer. The wrapping is explicit and the resulting object is fully polymorphic: anything that accepts a Coffee will accept a SugarDecorator(MilkDecorator(SimpleCoffee())) without complaint.
The Pythonic Alternative: Python's @ Decorator
Wait, Python already has decorators! They're functions that wrap functions:
```python
def log_calls(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        result = func(*args, **kwargs)
        print(f"{func.__name__} returned {result}")
        return result
    return wrapper

@log_calls
def add(a, b):
    return a + b

add(2, 3)
# Calling add
# add returned 5
```

This is the decorator pattern applied to functions. Much simpler than the class-based approach. Python's @ syntax is syntactic sugar for add = log_calls(add): you're literally replacing the function with a wrapped version. The original function is preserved inside the wrapper's closure, so all the original behavior still runs, just with the extra logic layered around it.
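One practical detail worth knowing: wrapping replaces the function's metadata (__name__, docstring) unless you copy it over with functools.wraps. A quick sketch:

```python
from functools import wraps

def log_calls(func):
    @wraps(func)  # copies __name__, __doc__, etc. onto the wrapper
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(a, b):
    """Add two numbers."""
    return a + b
```

Without @wraps, add.__name__ would report "wrapper" and the docstring would vanish, which quietly breaks debugging tools, help(), and anything else that introspects the function.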
For the coffee example, we could use:
```python
from functools import wraps

class Coffee:
    def __init__(self, base_cost, base_description):
        self.cost_value = base_cost
        self.description_value = base_description

    def cost(self):
        return self.cost_value

    def description(self):
        return self.description_value

def add_ingredient(name, price):
    def decorator(coffee_fn):
        @wraps(coffee_fn)
        def wrapper(*args, **kwargs):
            coffee = coffee_fn(*args, **kwargs)
            coffee.cost_value += price
            coffee.description_value += f", {name}"
            return coffee
        return wrapper
    return decorator

@add_ingredient("sugar", 0.25)
@add_ingredient("milk", 0.50)
def make_coffee():
    return Coffee(2.00, "Simple coffee")

coffee = make_coffee()
print(coffee.description())  # Simple coffee, milk, sugar
print(coffee.cost())  # 2.75
```

Or more Pythonically, just compute these on the fly:
```python
class Coffee:
    def __init__(self, base_cost, ingredients=None):
        self.base_cost = base_cost
        self.ingredients = ingredients or []

    def cost(self):
        return self.base_cost + sum(price for _, price in self.ingredients)

    def description(self):
        names = [name for name, _ in self.ingredients]
        return f"Simple coffee{', ' + ', '.join(names) if names else ''}"

coffee = Coffee(2.00, [("milk", 0.50), ("sugar", 0.25)])
print(coffee.description())  # Simple coffee, milk, sugar
print(coffee.cost())  # 2.75
```

This is simpler and avoids the wrapping overhead. Sometimes the right answer is to step back and model your data differently rather than stacking wrappers. A list of ingredients is just a list; you don't need eight classes to represent that.
When to Use Decorator (Class-Based):
- You need to dynamically add behavior to objects at runtime.
- You want to stack multiple behaviors (decorators on top of decorators).
- The component interface is complex and needs full abstraction.
When Python's @ Decorator is Better:
- You're decorating functions, not objects.
- You want to add cross-cutting concerns (logging, timing, caching).
- The code is cleaner without class wrapping.
When It's Overkill:
- A simple composition or inheritance works just as well.
- You only add one behavior (no stacking).
- A configuration object is simpler.
Real-World Example: Building a Plugin System
Let's tie these patterns together in a practical scenario: a plugin system that loads and executes third-party code dynamically.
Requirements:
- Plugins register themselves automatically.
- Each plugin has a unique algorithm for processing data.
- Plugins publish events when they're done.
- The system maintains a single configuration object.
- Plugins can be added without modifying the core system.
The Design:
from abc import ABC, abstractmethod

# 1. Factory + Registry for plugin discovery
class PluginRegistry:
    _plugins = {}

    @classmethod
    def register(cls, name):
        """Decorator to auto-register plugins."""
        def wrapper(plugin_class):
            cls._plugins[name] = plugin_class
            return plugin_class
        return wrapper

    @classmethod
    def load(cls, name):
        """Factory method to instantiate plugins."""
        if name not in cls._plugins:
            raise ValueError(f"Plugin '{name}' not found")
        return cls._plugins[name]()

# 2. Strategy pattern for algorithm swapping
class ProcessorPlugin(ABC):
    @abstractmethod
    def process(self, data):
        pass

@PluginRegistry.register("json")
class JSONProcessor(ProcessorPlugin):
    def process(self, data):
        import json
        return json.dumps(data, indent=2)

@PluginRegistry.register("csv")
class CSVProcessor(ProcessorPlugin):
    def process(self, data):
        import csv
        from io import StringIO
        output = StringIO()
        if isinstance(data, list) and data:
            writer = csv.DictWriter(output, fieldnames=data[0].keys())
            writer.writeheader()
            writer.writerows(data)
        return output.getvalue()

# 3. Observer pattern for event handling
class EventBus:
    def __init__(self):
        self._listeners = {}

    def subscribe(self, event, callback):
        if event not in self._listeners:
            self._listeners[event] = []
        self._listeners[event].append(callback)

    def emit(self, event, **data):
        if event in self._listeners:
            for callback in self._listeners[event]:
                callback(**data)

# 4. Singleton for configuration
class Config:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}
        return cls._instance

    def set(self, key, value):
        self.settings[key] = value

    def get(self, key, default=None):
        return self.settings.get(key, default)

# 5. Orchestration
class PluginEngine:
    def __init__(self):
        self.config = Config()
        self.events = EventBus()
        self.events.subscribe("processing_started", self._on_started)
        self.events.subscribe("processing_finished", self._on_finished)

    def _on_started(self, **data):
        print(f"Starting: {data}")

    def _on_finished(self, **data):
        print(f"Finished: {data}")

    def process(self, plugin_name, data):
        self.events.emit("processing_started", plugin=plugin_name)
        processor = PluginRegistry.load(plugin_name)
        result = processor.process(data)
        self.events.emit("processing_finished", plugin=plugin_name, result=result)
        return result

# Usage
if __name__ == "__main__":
    engine = PluginEngine()

    # Configure the system
    engine.config.set("debug", True)

    # Process data with different plugins
    data = [{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]

    json_result = engine.process("json", data)
    print("JSON Result:")
    print(json_result)
    print()

    csv_result = engine.process("csv", data)
    print("CSV Result:")
    print(csv_result)
Output:
Starting: {'plugin': 'json'}
Finished: {'plugin': 'json', 'result': '[...]'}
JSON Result:
[
  {
    "name": "Alice",
    "age": 30
  },
  {
    "name": "Bob",
    "age": 25
  }
]

Starting: {'plugin': 'csv'}
Finished: {'plugin': 'csv', 'result': 'name,age\r\nAlice,30\r\nBob,25\r\n'}
CSV Result:
name,age
Alice,30
Bob,25
Note that the "Finished" events print before the "JSON Result:" and "CSV Result:" labels, because the engine emits them inside process(), before the caller prints the returned value. (The csv module also terminates rows with \r\n by default.)
This example combines:
- Factory + Registry for auto-discovering plugins
- Strategy for algorithm swapping
- Observer for event handling
- Singleton for centralized configuration
- Adapter (implicitly, if plugins had incompatible interfaces)
Each pattern solves one problem. Together, they create a flexible, extensible system. Notice how each pattern stays in its own lane: the PluginRegistry doesn't care about events, the EventBus doesn't know about plugins, and the Config singleton is blissfully unaware of both. That separation of concerns is the real payoff when patterns are applied correctly.
Why Patterns Matter in Python (Revisited: Real Cost-Benefit Thinking)
You might be thinking: if Python has simpler alternatives to most of these patterns, why learn the formal versions at all? The answer is that the formal versions are the conceptual substrate. You need to understand what a Strategy pattern is solving before you can confidently decide whether a dictionary of functions is sufficient or whether you actually need the class-based version. Knowing both gives you a choice. Knowing only the Pythonic shortcut means you'll sometimes reach for it in situations where it genuinely isn't enough.
There's also the team dimension to consider. If you're working with developers who come from Java, C#, or Go backgrounds, using the recognizable pattern structure (even if slightly simplified) gives them immediate orientation. Conversely, if your team is all Python veterans, the lighter functional versions will feel more natural and readable. The right choice depends on your audience.
Pythonic Alternatives: A Summary
Python's language features make many classic patterns either unnecessary or dramatically simpler. Here's the distilled version: use a dictionary instead of a Factory class, pass a function instead of implementing a Strategy interface, subscribe callbacks instead of creating Observer classes, and use a module-level variable instead of implementing __new__-based Singleton magic.
The underlying theme is that Python's functions are objects, its modules are namespaces with state, and its @ syntax already bakes the Decorator pattern into the language itself. When you find yourself writing class MyStrategy(ABC): @abstractmethod def execute(self): pass, stop and ask whether a callable with a clear name would do the same job in three lines instead of ten. Usually, it will. The times when you genuinely need the full class-based version are when the algorithm is stateful and complex, when you need type checking on the strategy object, or when you're designing a public API that needs to guide third-party implementers through an explicit interface contract.
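To make that trade-off concrete, here is a minimal sketch of the dictionary-of-functions Strategy. The names (by_price, by_name, SORT_STRATEGIES) are invented for illustration, not from any library:

```python
# A dictionary of plain functions stands in for a Strategy class hierarchy.
def by_price(items):
    return sorted(items, key=lambda item: item["price"])

def by_name(items):
    return sorted(items, key=lambda item: item["name"])

# The registry maps strategy names to callables.
SORT_STRATEGIES = {"price": by_price, "name": by_name}

def sort_items(items, strategy="price"):
    return SORT_STRATEGIES[strategy](items)

inventory = [{"name": "mug", "price": 9}, {"name": "cap", "price": 12}]
print(sort_items(inventory, "price")[0]["name"])  # mug
print(sort_items(inventory, "name")[0]["name"])   # cap
```

Three lines per strategy, no ABC, and adding a new algorithm is one dictionary entry.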
Common Pattern Mistakes
Even experienced developers make these mistakes. Let's walk through the most painful ones so you don't have to learn them the hard way.
Mistake #1: Singleton as Global Mutable State. The Singleton pattern is often criticized, and rightly so, because it's global mutable state with extra steps. Every part of your code can read and modify it, making behavior hard to predict and test. The fix is to be disciplined about what goes into a Singleton. Configuration and logging are good candidates. Business logic state is not. If you catch yourself putting application data into a Singleton "for convenience," that's a red flag.
Mistake #2: Factory When You Only Have One Class. You have Dog and you write a DogFactory with one branch. That's ceremony without benefit. Factories justify themselves when you have three or more types and the selection logic is non-trivial. With one or two types, just instantiate directly.
Mistake #3: Observer Without Unsubscribe. If subscribers register but never unsubscribe, you accumulate dead listeners over time. In long-running applications this becomes a memory leak and a source of mysterious side effects from callbacks that should have stopped firing. Always implement and call an unsubscribe method, or use weak references in your listener list.
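One way to get the weak-reference fix, sketched here with weakref.WeakMethod so that a collected subscriber drops out automatically. This EventBus variant and the Subscriber class are illustrative, not the ones from the plugin example above:

```python
import gc
import weakref

class EventBus:
    """Event bus that holds listeners weakly: when a subscriber is
    garbage-collected, its callbacks simply stop firing."""
    def __init__(self):
        self._listeners = {}  # event name -> list of weak method refs

    def subscribe(self, event, callback):
        # WeakMethod wraps a bound method without keeping its owner alive
        self._listeners.setdefault(event, []).append(weakref.WeakMethod(callback))

    def emit(self, event, **data):
        live = []
        for ref in self._listeners.get(event, []):
            method = ref()  # None once the owning object was collected
            if method is not None:
                method(**data)
                live.append(ref)
        self._listeners[event] = live  # prune dead listeners as we go

calls = []

class Subscriber:
    def on_tick(self, **data):
        calls.append(data)

bus = EventBus()
sub = Subscriber()
bus.subscribe("tick", sub.on_tick)
bus.emit("tick", n=1)   # delivered
del sub
gc.collect()            # subscriber is gone...
bus.emit("tick", n=2)   # ...so nothing is delivered
print(calls)            # [{'n': 1}]
```

Explicit unsubscribe methods work too; weak references just make forgetting to call them harmless.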
Mistake #4: Strategy With Stateful Functions. A Strategy should be stateless, or at least have clearly managed state. If your "strategy" functions are secretly reaching into shared mutable state, you've lost the interchangeability that makes the pattern useful. Each strategy should be self-contained: given the same inputs, it should produce the same outputs regardless of what other code has run.
Mistake #5: Forgetting @functools.wraps. When writing function decorators, always use @wraps(func) from the functools module inside your wrapper. Without it, the wrapped function loses its __name__, __doc__, and __qualname__ attributes. This breaks introspection tools, documentation generators, and debuggers in subtle ways that are annoying to diagnose.
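The fix is one line. In this sketch, log_calls and greet are invented names; the only load-bearing piece is @functools.wraps:

```python
import functools

def log_calls(func):
    @functools.wraps(func)  # copies __name__, __doc__, __qualname__ onto wrapper
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def greet(name):
    """Return a friendly greeting."""
    return f"Hello, {name}"

print(greet.__name__)  # greet (would be 'wrapper' without @wraps)
print(greet.__doc__)   # Return a friendly greeting.
```

Without the @functools.wraps line, help(greet) would show the wrapper's signature and no docstring.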
Common Pitfalls and How to Avoid Them
Pitfall #1: Instantiating Singletons Multiple Times
# Wrong
class Config:
    _instance = None
    def __new__(cls):
        # Bug: the new instance is never stored in cls._instance,
        # so every call builds a fresh object
        return cls._instance or super().__new__(cls)

config1 = Config()
config2 = Config()  # A different instance each time, not a shared one
config1.debug = True
print(config2.debug)  # AttributeError - config2 never got a debug attribute
Better: Use module-level state or be explicit about the intention.
# Right
class _Config:
    def __init__(self):
        self.debug = False

config = _Config()  # Single instance at module level

# Other modules import it
from config import config
config.debug = True
Pitfall #2: Over-Registering in Factories
# Wrong
HANDLERS = {}

def register_handler(name):
    def wrapper(fn):
        HANDLERS[name] = fn
        return fn
    return wrapper

@register_handler("on_click")
@register_handler("on_hover")
@register_handler("on_focus")
def handle_event(event):
    print(event)
Registering the same handler under multiple names creates duplication and confusion. Either use different functions or a mapping.
# Right
def handle_click(event): print("click:", event)
def handle_hover(event): print("hover:", event)
def handle_focus(event): print("focus:", event)

HANDLERS = {
    "on_click": handle_click,
    "on_hover": handle_hover,
    "on_focus": handle_focus,
}
Pitfall #3: Tight Coupling in Observers
# Wrong
class User:
    def __init__(self, name):
        self.name = name
        self.email_notifier = EmailNotifier()  # Tightly coupled
    def update_profile(self, new_name):
        self.name = new_name
        self.email_notifier.send(f"Profile updated to {new_name}")
The User class depends directly on EmailNotifier. If you want to add SMS notifications, you have to modify User.
# Right
class User:
    def __init__(self, name, notifiers=None):
        self.name = name
        self.notifiers = notifiers or []

    def update_profile(self, new_name):
        self.name = new_name
        for notifier in self.notifiers:
            notifier.notify(f"Profile updated to {new_name}")

# Usage
user = User("Alice", [EmailNotifier(), SMSNotifier()])
Now User doesn't depend on specific notifiers. You can add any notifier without changing User.
Pitfall #4: Decorator Stacking Complexity
# Can be confusing
@log_calls
@validate_input
@cache_result
def expensive_function(x):
    return x ** 2
Too many decorators make the function's behavior hard to trace. Limit yourself to 2-3 decorators per function.
# Better: Apply only what's necessary
@cache_result
@validate_input
def expensive_function(x):
    return x ** 2

# Logging happens elsewhere, or use a separate concern
Choosing Between Patterns and Simpler Solutions
Here's a practical guide for deciding when to use each pattern:
Factory Pattern Decision Flow:
- Do you have 3+ related classes? YES → Use Factory
- Is creation logic complex? YES → Use Factory
- Otherwise → Just instantiate directly
Strategy Pattern Decision Flow:
- Do you switch algorithms at runtime? YES → Use Strategy
- Is each algorithm complex? YES → Use Strategy
- Otherwise → Just call different functions
Observer Pattern Decision Flow:
- Do you have many subscribers (3+)? YES → Use Observer
- Do subscribers come and go dynamically? YES → Use Observer
- Otherwise → Just use callbacks
Singleton Decision Flow:
- Must only one instance exist? YES → Use Singleton
- Do multiple parts access it globally? YES → Use Singleton
- Otherwise → Pass it as an argument
Adapter Pattern Decision Flow:
- Are you integrating incompatible libraries? YES → Use Adapter
- Can you change one interface? NO → Use Adapter
- Otherwise → Just call the right method
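To ground the Adapter branch, here is a hedged sketch: a hypothetical legacy class exposing write(msg), adapted to code that expects log(level, msg). Every name here is invented for illustration:

```python
class LegacyWriter:
    """Stand-in for a third-party class we can't modify."""
    def __init__(self):
        self.lines = []

    def write(self, msg):
        self.lines.append(msg)

class WriterAdapter:
    """Exposes the log(level, msg) interface our code expects,
    translating each call onto the legacy write(msg) method."""
    def __init__(self, writer):
        self._writer = writer

    def log(self, level, msg):
        self._writer.write(f"[{level}] {msg}")

writer = LegacyWriter()
logger = WriterAdapter(writer)
logger.log("INFO", "plugin loaded")
print(writer.lines)  # ['[INFO] plugin loaded']
```

The adapter owns the translation; neither interface has to change.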
Decorator Pattern Decision Flow:
- Must you add behavior dynamically? YES → Use Decorator
- Must you stack behaviors? YES → Use Decorator
- Otherwise → Use composition or inheritance
Anti-Patterns: When Design Patterns Go Wrong
Design patterns are tools, not commandments. Misuse them and you'll create more problems than you solve.
Anti-Pattern #1: Over-Engineering
You see a Factory pattern and start building factories everywhere:
# Don't do this
class UserFactory:
    @staticmethod
    def create_user(name, email):
        return User(name, email)

# Just do this
def create_user(name, email):
    return User(name, email)

# Or just do this
user = User(name, email)
Not everything needs a pattern. If direct instantiation is clear, use it.
Anti-Pattern #2: God Objects
A single Singleton that does everything:
# Don't do this
class AppManager:
    # Does logging, database, auth, config, notifications...
    def log(self, msg): ...
    def query(self, sql): ...
    def authenticate(self, user, pwd): ...
    def notify(self, msg): ...
    # 200 methods...

# Separate concerns
class Logger: ...
class Database: ...
class AuthManager: ...
class Notifier: ...
One object, one responsibility. Split it up.
Anti-Pattern #3: Premature Abstraction
Writing abstract interfaces before you know what they should be:
# Don't do this
class PaymentStrategy(ABC):
    @abstractmethod
    def process(self, amount):
        pass

# You have one implementation
class StripePayment(PaymentStrategy):
    def process(self, amount):
        return stripe.charge(amount)

# Wait for the second implementation before abstracting
Write two implementations, then abstract. Not the other way around.
Anti-Pattern #4: Mixing Patterns
Using too many patterns in one module:
# Too many patterns in one file
factory = PaymentFactory() # Factory
checkout = Checkout(strategy) # Strategy
publisher.subscribe(observer) # Observer
singleton = DatabaseConnection() # Singleton
adapter = ServiceAdapter() # Adapter
Each pattern adds complexity. Use only what you need, where it solves a real problem.
Making the Right Choice
Here's a quick decision tree:
Need to create different object types? → Factory Pattern
- Or just use a dictionary of classes + a function? → Pythonic Factory
Need to swap algorithms at runtime? → Strategy Pattern
- Or just pass functions? → Pythonic Strategy
Need loosely coupled publishers and subscribers? → Observer Pattern
- Or just use callbacks? → Pythonic Observer
Need to ensure one instance exists globally? → Singleton Pattern
- Or just use a module-level instance? → Pythonic Singleton
Need to wrap an incompatible interface? → Adapter Pattern
- Or just call the right method? → Keep It Simple
Need to add behavior dynamically? → Decorator Pattern
- Or use Python's @decorator or composition? → Pythonic Decorator
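The "module-level instance" answer above rests on a language guarantee: Python imports a module once per process and caches it in sys.modules, so every later import hands back the same object and module-level attributes behave like singleton state. A quick demonstration using the standard json module:

```python
import sys

# Importing the same module twice returns the cached object from
# sys.modules; that cache is what makes module-level state a singleton.
import json as first_import
import json as second_import

print(first_import is second_import)        # True
print(first_import is sys.modules["json"])  # True
```

A hypothetical settings.py with top-level variables gets this behavior for free, with no __new__ tricks at all.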
The Bottom Line
Design patterns aren't laws. They're solutions to recurring problems. But Python often has simpler solutions. Functions as first-class citizens, modules as singletons, dictionaries as registries, decorators built into the language: Python gives you tools that make many traditional patterns unnecessary.
Use patterns when:
- The problem is complex enough to warrant the abstraction.
- You have multiple implementations or behaviors.
- The pattern makes code clearer, not more convoluted.
Skip patterns when:
- A simple function or object does the job.
- The pattern adds more code than it removes.
- Your future self would be confused reading it.
Learning these patterns, even the ones you'll rarely use verbatim, sharpens your ability to recognize structural problems in code. You'll start seeing "this is a Factory problem" or "this codebase needs an Observer" before the mess gets out of hand. That pattern-recognition skill is what separates engineers who write functional code from engineers who write maintainable systems. The goal isn't to collect patterns like trading cards; it's to internalize the problems they solve so your instinct for clean architecture becomes second nature.
Code that's simple and readable beats a perfectly implemented design pattern every time. But the deepest code, the kind that scales effortlessly, onboards quickly, and survives contact with changing requirements, often carries the skeleton of a well-chosen pattern underneath its Pythonic surface. Know the classics. Know when to reach for the shortcut. And know the difference.
Keep it Pythonic. Keep it simple. Get stuff done.