June 13, 2025
Python · OOP · Protocols · Type Hints

Python Protocols and Structural Subtyping

Here is a scenario you have probably lived through. You are building a system, and you have this function that needs to accept objects that can be serialized, written to disk, sent over a network, stored in a cache. You define an abstract base class called Serializable, you add the serialize() and deserialize() methods, and everything looks clean. Then a teammate integrates a third-party library. That library has a DataRecord class that absolutely does what you need, it has serialize(), it has deserialize(), the signatures match perfectly. But it does not inherit from your Serializable. Now your type checker is screaming. Your options are grim: subclass DataRecord just to tag it with the right inheritance, write a wrapper class, or disable the type check and cross your fingers. None of those feel right, because none of them are right.

This exact problem plays out constantly in real codebases. The more you build systems that integrate with external libraries, framework plugins, or third-party data models, the more you run into the wall where inheritance-based typing fights against Python's dynamic, flexible nature. You want the type checker to catch real mistakes, passing a str where an object is expected, calling a method that does not exist, but you do not want it to penalize you for the crime of not inheriting from the "right" class. You want the freedom of duck typing and the safety net of static analysis at the same time.

That tension is exactly what Protocols (introduced in PEP 544, available since Python 3.8) were designed to resolve. Protocols bring structural subtyping to Python's type system, letting you define what an object needs to be able to do, its capabilities, its interface, without caring at all about its inheritance chain. If an object has the right methods with the right signatures, it is compatible. Full stop. The type checker agrees. Your code stays clean. No wrapper classes, no forced inheritance, no monkey-patching. We are going to cover how protocols work, when to use them, the gotchas that trip people up, and the specific situations where protocols are the cleanest tool available. Let's dig in.

Table of Contents
  1. The Problem: Nominal vs Structural Subtyping
  2. Structural vs Nominal Typing: The Deep Distinction
  3. Introducing Protocols: Structural Subtyping Done Right
  4. The Hidden Layer: Why This Matters
  5. `@runtime_checkable`: Making Protocols Enforceable at Runtime
  6. Runtime vs Static Protocol Checking
  7. When to Use: Protocol vs ABC vs Just Duck Typing
  8. When Protocols Beat ABCs
  9. Built-in Protocols: Sized, Iterable, Iterator, Hashable
  10. Sized Protocol
  11. Iterable Protocol
  12. Iterator Protocol
  13. Hashable Protocol
  14. Combining Protocols: Multiple Structural Contracts
  15. Protocol Inheritance: Building Complex Contracts
  16. Real-World Example: Library Design Without Forcing Inheritance
  17. Type Checker Behavior: How Mypy and Pyright See Protocols
  18. Common Protocol Mistakes
  19. Covariance and Contravariance in Protocols
  20. Practical Gotchas: What You Need to Know
  21. Gotcha 1: Runtime Checking Is Shallow
  22. Gotcha 2: Protocols Don't Prevent Instantiation
  23. Gotcha 3: Self Types in Protocols
  24. When NOT to Use Protocols
  25. Combining Protocols and ABCs: The Hybrid Approach
  26. Summary: Choose Your Subtyping Strategy

The Problem: Nominal vs Structural Subtyping

Before we jump into protocols, let's clarify what we're solving.

Nominal subtyping is what you know: a class is a subtype if it explicitly inherits from a parent class. The name, the class's declared lineage, is what determines compatibility. This is how Java and C# work, and it is how Python's ABCs work.

python
from abc import ABC, abstractmethod
 
class Animal(ABC):
    @abstractmethod
    def speak(self) -> str:
        pass
 
class Dog(Animal):
    def speak(self) -> str:
        return "Woof!"
 
class Cat(Animal):
    def speak(self) -> str:
        return "Meow!"
 
def make_sound(animal: Animal) -> None:
    print(animal.speak())
 
make_sound(Dog())  # Works
make_sound(Cat())  # Works

This works great if you own the classes. But what if you don't? What if someone else wrote a Bird class that has a speak() method, but never inherited from Animal?

python
class Bird:
    def speak(self) -> str:
        return "Chirp!"
 
make_sound(Bird())  # Type checker FAILS, not an Animal!

Your type checker (mypy, pyright, etc.) complains because Bird isn't nominally a subtype. Yet at runtime, Python doesn't care, it works fine. This is the gap protocols fill.

Structural subtyping asks: does this object have the methods I need? If yes, it's compatible. No inheritance required. It is a fundamentally different question. Instead of asking "what is this?", it asks "what can this do?" That shift in perspective is small on paper but enormous in practice, especially as your codebases grow and integrate with the wider Python ecosystem.

Structural vs Nominal Typing: The Deep Distinction

To really understand why protocols matter, it helps to sit with the philosophical difference between structural and nominal typing for a moment. Nominal typing is about identity, a class's name and ancestry define what it is. Structural typing is about capability, a class's methods and attributes define what it can do. These are two genuinely different models of compatibility.

In nominal typing, you are essentially stamping a class with a badge: "This class is an Animal." The badge is granted through inheritance, and type checkers verify the badge. In structural typing, you are describing a capability profile: "This function needs something that can speak." Any class with a compatible speak() method earns that compatibility automatically, no badge required. Go and TypeScript use structural typing for interfaces. Haskell's type classes are structurally resolved. Python's protocols bring this paradigm into the Python type system without abandoning the language's dynamic roots.

The practical consequence is significant for library and framework authors. When you define a nominal interface (an ABC), you are forcing your users to depend on your library just to annotate their types. Every class that wants to be compatible with your system has to import your base class and inherit from it. That creates coupling. Protocols break that coupling entirely. Your users can write completely independent classes in completely separate codebases, and as long as those classes have the right methods, the type system considers them compatible with your interface. This is the foundation of truly open, composable Python architecture.
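To make the coupling point concrete, here is a minimal sketch. The `SupportsClose` protocol and `shutdown` function stand in for "library code", and `TempFile` stands in for a user's class in a completely separate codebase; all names are invented for illustration:

```python
from typing import Protocol

# "Library" code: describes a capability profile, nothing more
class SupportsClose(Protocol):
    def close(self) -> None:
        ...

def shutdown(resource: SupportsClose) -> None:
    resource.close()

# "User" code: never imports SupportsClose, never inherits from it
class TempFile:
    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True

tmp = TempFile()
shutdown(tmp)      # Type checker: PASS, structurally compatible
print(tmp.closed)  # True
```

The user's `TempFile` earns compatibility purely by having a matching `close()` method; with a nominal ABC, it would have needed to import and inherit from the library's base class.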

Introducing Protocols: Structural Subtyping Done Right

A Protocol is a class that defines a set of methods and attributes. If your object has those methods/attributes with the right signatures, it's considered a subtype, without inheriting from anything.

python
from typing import Protocol
 
class Drawable(Protocol):
    def draw(self) -> str:
        ...
 
class Circle:
    def draw(self) -> str:
        return "Drawing a circle"
 
class Square:
    def draw(self) -> str:
        return "Drawing a square"
 
def render(obj: Drawable) -> None:
    print(obj.draw())
 
render(Circle())  # Type checker: PASS
render(Square())  # Type checker: PASS

Notice: no inheritance. Circle and Square don't inherit from Drawable. Yet the type checker understands they're compatible with Drawable. That's structural subtyping. The ellipsis (...) in the protocol method body is the conventional way to mark a protocol method as abstract, you can also use pass, but ... is the accepted idiom that signals "this is intentionally a stub." The type checker treats any class with a matching draw() -> str method as satisfying the Drawable protocol, regardless of where that class comes from.

The Hidden Layer: Why This Matters

Why do we need protocols if duck typing already works at runtime?

  1. Type Safety: Your IDE catches mistakes before runtime. If you pass something without a draw() method, the type checker complains.
  2. Documentation: Protocols document what methods your function expects without forcing inheritance.
  3. Flexibility: Third-party classes can be compatible without modification.
  4. IDE Support: Your editor knows what methods are available, offers autocomplete.

This is the real power: duck typing's freedom + static typing's safety. Think about what this means for large teams. When a new developer joins your project and sees a function that accepts a Drawable, they immediately understand exactly what that function needs, they can look at the Drawable protocol definition and see the full interface contract. They do not need to trace through inheritance trees or read implementation code. The protocol is self-documenting in a way that plain duck typing never can be, because plain duck typing has no artifact you can read, the interface only exists implicitly in how the code is used.

@runtime_checkable: Making Protocols Enforceable at Runtime

By default, protocols are only checked at type-check time (when mypy runs). If you want to check them at runtime with isinstance(), you use @runtime_checkable. This is useful when you need to branch your code based on what an object can do, for example, in a plugin system where you receive objects of unknown types and need to dispatch to the right handler.

python
from typing import Protocol, runtime_checkable
 
@runtime_checkable
class Drawable(Protocol):
    def draw(self) -> str:
        ...
 
class Circle:
    def draw(self) -> str:
        return "Drawing a circle"
 
class NotDrawable:
    pass
 
circle = Circle()
not_drawable = NotDrawable()
 
print(isinstance(circle, Drawable))       # True
print(isinstance(not_drawable, Drawable)) # False

The @runtime_checkable decorator registers the protocol so that isinstance() checks work at runtime, letting you guard your code paths dynamically. This is especially powerful in plugin architectures, serialization frameworks, and any system where you are accepting objects from external sources and need to verify their capabilities before using them.

Here's the catch: @runtime_checkable is shallow. It only checks if methods exist, not if their signatures are correct.

python
@runtime_checkable
class Processor(Protocol):
    def process(self, data: list[int]) -> int:
        ...
 
class BadProcessor:
    def process(self, data: str) -> str:  # Wrong signature!
        return "Processed"
 
bad = BadProcessor()
print(isinstance(bad, Processor))  # True, signature isn't checked at runtime!

The type checker (mypy) would catch this. Runtime doesn't. That's fine, protocols are primarily for static type checkers. The runtime check is a convenience for "does this object have the right method name", not a full contract verification. If you need stronger guarantees at runtime, you need manual validation, for example comparing annotations with typing.get_type_hints, keeping in mind that even ABCs only enforce method presence, not signatures.
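If you do want a coarse annotation check at runtime, one option is to compare type hints yourself. This is a sketch under assumptions: the helper name `has_matching_annotations` is ours, and it only compares annotations by equality, which is far weaker than what mypy does:

```python
from typing import Protocol, get_type_hints, runtime_checkable

@runtime_checkable
class Processor(Protocol):
    def process(self, data: list[int]) -> int:
        ...

def has_matching_annotations(obj: object, method: str, expected: type) -> bool:
    """Shallow manual check: does obj have a callable `method` whose
    annotations equal the protocol's? (Equality comparison only.)"""
    impl = getattr(obj, method, None)
    if not callable(impl):
        return False
    proto_hints = get_type_hints(getattr(expected, method))
    impl_hints = get_type_hints(impl)
    return proto_hints == impl_hints

class GoodProcessor:
    def process(self, data: list[int]) -> int:
        return sum(data)

class BadProcessor:
    def process(self, data: str) -> str:  # Wrong signature
        return "Processed"

print(has_matching_annotations(GoodProcessor(), "process", Processor))  # True
print(has_matching_annotations(BadProcessor(), "process", Processor))   # False
```

Even this only catches annotation mismatches, not behavioral differences, so treat it as a last resort for plugin boundaries, not a substitute for static checking.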

Runtime vs Static Protocol Checking

This distinction between runtime and static protocol checking is one of the most important things to internalize when working with protocols, because conflating the two leads to subtle bugs and false confidence. Static protocol checking happens when you run mypy, pyright, or pylance, your type checker reads the source code without executing it and verifies that types are used consistently. When mypy sees a function call where a Processor is expected, it checks the full method signature: name, parameter types, return type, everything. This is the authoritative, complete form of protocol verification.

Runtime checking, via isinstance() on a @runtime_checkable protocol, is fundamentally different. Python does not store type annotations at runtime in a way that makes full signature verification possible without executing code. What Python can check at runtime is method existence, whether the object has an attribute with the right name that is callable. The types of parameters and return values are invisible to isinstance(). This means you can have an object that passes the runtime isinstance() check but would be flagged as incompatible by mypy, because its method has the wrong parameter types or return type.

The practical guidance is this: rely on your type checker for protocol compliance verification during development, and use @runtime_checkable with isinstance() sparingly, only for coarse-grained capability checks where you need to branch at runtime. Never use isinstance() on a protocol as a substitute for proper type annotations, it gives you a weaker guarantee than the type checker would, and it can create a false sense of security. The best workflow is to let mypy catch incompatibilities during development so that by the time code reaches runtime, you already know the types are correct.

When to Use: Protocol vs ABC vs Just Duck Typing

This is the practical question: which tool do you reach for?

Use Protocols when:

  • You want to define what methods something must have
  • You don't want to force inheritance
  • You're designing library APIs (your users shouldn't need to inherit from your base class)
  • You want type checker compatibility without runtime overhead

Use ABCs (Abstract Base Classes) when:

  • You need runtime enforcement with isinstance()
  • You want to prevent instantiation of incomplete classes
  • You're building a framework where inheritance makes sense
  • You need shared implementation in the base class

Just Duck Typing when:

  • You're writing quick scripts
  • Type checking isn't a concern
  • You're prototyping

Here's a comparison that puts all three side by side to make the trade-offs concrete:

python
from abc import ABC, abstractmethod
from typing import Protocol, runtime_checkable
 
# ABC: Nominal + Runtime
class Animal(ABC):
    @abstractmethod
    def speak(self) -> str:
        pass
 
# Protocol: Structural + Type-Check Only
class Speaker(Protocol):
    def speak(self) -> str:
        ...
 
# Runtime Checkable: Structural + Runtime (Shallow)
@runtime_checkable
class Vocalizer(Protocol):
    def speak(self) -> str:
        ...
 
class Dog(Animal):  # Must inherit from Animal
    def speak(self) -> str:
        return "Woof!"
 
class Cat:  # No inheritance needed
    def speak(self) -> str:
        return "Meow!"
 
def make_sound_abc(animal: Animal) -> str:
    return animal.speak()
 
def make_sound_protocol(speaker: Speaker) -> str:
    return speaker.speak()
 
# ABC requires inheritance
make_sound_abc(Dog())  # Works
make_sound_abc(Cat())  # Type error, not an Animal
 
# Protocol works without inheritance
make_sound_protocol(Dog())  # Works
make_sound_protocol(Cat())  # Works
 
# Runtime check (shallow)
print(isinstance(Cat(), Vocalizer))  # True

The hidden truth: Choose based on your API's contract. If you're building a library, protocols let your users bring their own classes. If you're building a framework, ABCs enforce structure. The decision is not purely technical, it is also about the relationship you want to have with your users and how much control you want to retain over how your interfaces are implemented.

When Protocols Beat ABCs

There are specific scenarios where protocols are not just an option but clearly the superior choice, and understanding these scenarios will sharpen your architectural instincts. The first and most compelling scenario is third-party integration. When you are writing code that needs to work with classes from libraries you do not control, pandas DataFrames, SQLAlchemy models, Pydantic schemas, you cannot force those classes to inherit from your ABCs. But you can define a protocol that describes the interface you need, and as long as those external classes happen to implement it, your type checker will confirm compatibility without any modifications to the external code.

The second scenario is testing. Protocols make mocking dramatically cleaner. When a function accepts a DatabaseWriter protocol instead of a concrete DatabaseWriter ABC, your test can pass a simple mock object that just has the write() method implemented, no need to inherit from anything, no need to stub out every abstract method. The mock class can be a simple, purpose-built five-line class that satisfies exactly the protocol methods needed for that test. ABCs tend to make tests more verbose because you have to satisfy the full abstract interface even if your test only exercises one code path.

The third scenario is composing capabilities from multiple sources. Protocols compose naturally through multiple inheritance at the protocol level, you can define a Serializable protocol and a Comparable protocol and then accept a Serializable and Comparable by simply combining them. With ABCs, multiple inheritance creates diamond problems and MRO complexity. Protocols avoid this entirely because they carry no implementation, they are pure interface specifications.
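The testing scenario above is worth seeing in code. Here is a hedged sketch of the `DatabaseWriter` protocol idea, with a purpose-built fake standing in for the real writer; the concrete names (`save_event`, `FakeWriter`) are invented for this example:

```python
from typing import Protocol

class DatabaseWriter(Protocol):
    def write(self, record: dict) -> None:
        ...

# Production code depends only on the protocol
def save_event(writer: DatabaseWriter, event: dict) -> None:
    writer.write(event)

# Test code: a tiny fake, no inheritance, no mock library
class FakeWriter:
    def __init__(self) -> None:
        self.records: list[dict] = []

    def write(self, record: dict) -> None:
        self.records.append(record)

fake = FakeWriter()
save_event(fake, {"type": "login"})
print(fake.records)  # [{'type': 'login'}]
```

The fake satisfies exactly the one method the test exercises, and the type checker confirms it is a valid `DatabaseWriter` with zero ceremony.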

Built-in Protocols: Sized, Iterable, Iterator, Hashable

Python's standard library has protocols for common patterns. You've probably used them without knowing. These built-in protocols are defined in typing and collections.abc, and they are the canonical way to annotate functions that need common Python capabilities.

Sized Protocol

Any class with __len__() is Sized. This is the protocol behind Python's built-in len() function, anything that works with len() satisfies Sized:

python
from typing import Sized
 
class CustomList:
    def __init__(self, items: list):
        self.items = items
 
    def __len__(self) -> int:
        return len(self.items)
 
custom = CustomList([1, 2, 3])
 
# This works because CustomList has __len__
def print_size(obj: Sized) -> None:
    print(f"Size: {len(obj)}")
 
print_size(custom)  # Works
print_size([1, 2, 3])  # Works, lists are Sized

Using Sized as a type annotation is far more expressive than using list or tuple, because it communicates your actual requirement: you need something you can measure, not something with all the other list methods. This is good API design, ask for the minimum you need.

Iterable Protocol

Any class with __iter__() is Iterable. This is the foundation of Python's for loop, list comprehensions, and anything else that iterates:

python
from typing import Iterable
 
class CustomRange:
    def __init__(self, start: int, end: int):
        self.start = start
        self.end = end
 
    def __iter__(self):
        current = self.start
        while current < self.end:
            yield current
            current += 1
 
def sum_items(items: Iterable[int]) -> int:
    return sum(items)
 
print(sum_items(CustomRange(1, 5)))  # Works, has __iter__
print(sum_items([1, 2, 3]))           # Works, lists are Iterable

Notice that sum_items works with anything iterable, your custom range, a list, a generator, a tuple, a set. The Iterable annotation makes this generality explicit and type-checked. This is the kind of code that ages well, because it works with future types you have not even created yet.

Iterator Protocol

Iterators have both __iter__() and __next__(). An iterator is stateful, it remembers where it is in the sequence, while an iterable is just something you can iterate over:

python
from typing import Iterator
 
class CountUp:
    def __init__(self, limit: int):
        self.limit = limit
        self.current = 0

    def __iter__(self) -> "CountUp":
        return self

    def __next__(self) -> int:
        if self.current >= self.limit:
            raise StopIteration
        self.current += 1
        return self.current
 
def process_iterator(it: Iterator[int]) -> list[int]:
    return list(it)
 
print(process_iterator(CountUp(3)))  # Works

The distinction between Iterable and Iterator is subtle but important. An Iterable can produce a fresh iterator every time you call iter() on it. An Iterator is consumed as you step through it. Most of the time you want Iterable in your function signatures, it is more general and allows callers to pass lists, generators, or custom sequences. Use Iterator when you explicitly need to be able to call next() on the object.
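A small example makes the consumed-versus-reusable distinction tangible. The helper `first_two` is invented for illustration; it accepts any `Iterable` and pulls two elements from a fresh iterator:

```python
from typing import Iterable, Iterator

def first_two(items: Iterable[int]) -> list[int]:
    it: Iterator[int] = iter(items)  # An Iterable can always hand out a fresh iterator
    return [next(it), next(it)]

nums = [10, 20, 30]
print(first_two(nums))  # [10, 20]
print(first_two(nums))  # [10, 20], the list produces a new iterator each call

gen = (n for n in nums)  # A generator is an Iterator: stateful, consumed as you go
print(first_two(gen))    # [10, 20]
print(next(gen))         # 30, only one element remains
```

Because the list is merely `Iterable`, repeated calls start over; because the generator is an `Iterator`, each call continues from where the last one stopped.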

Hashable Protocol

Any class with __hash__() is Hashable. This is required for using objects as dictionary keys or set members:

python
from typing import Hashable
 
class Point:
    def __init__(self, x: int, y: int):
        self.x = x
        self.y = y
 
    def __hash__(self) -> int:
        return hash((self.x, self.y))
 
def use_in_set(items: list[Hashable]) -> set:
    return set(items)
 
print(use_in_set([1, 2, 3]))  # Works
print(use_in_set([Point(0, 0), Point(1, 1)]))  # Works

These built-in protocols let you write generic code that works with any class that implements the required methods. No inheritance chain needed. When you annotate with these standard protocols, you are communicating your intent to any reader, and to the type checker, with precise vocabulary that every Python developer already knows.

Combining Protocols: Multiple Structural Contracts

You can combine multiple protocols, a class needs to satisfy all of them. This is one of the cleanest patterns in the protocol toolkit, and it composes perfectly because protocols have no implementation to conflict:

python
from typing import Protocol
 
class Drawable(Protocol):
    def draw(self) -> str:
        ...
 
class Resizable(Protocol):
    def resize(self, factor: float) -> None:
        ...
 
class Shape(Drawable, Resizable, Protocol):
    ...
 
class Circle:
    def draw(self) -> str:
        return "●"
 
    def resize(self, factor: float) -> None:
        print(f"Resizing by {factor}")
 
def manipulate_shape(shape: Shape) -> None:
    print(shape.draw())
    shape.resize(2.0)
 
manipulate_shape(Circle())  # Works, Circle has both methods

A class is only compatible with Shape if it has both draw() and resize(). Note that for Shape to remain a protocol itself, Protocol must appear explicitly in its bases; subclassing two protocols without listing Protocol again produces an ordinary class that type checkers match nominally. This is composable, flexible, and powerful. You can build up complex capability requirements from small, focused protocol building blocks. Each small protocol is independently useful, Drawable on its own, Resizable on its own, and the combination is precise without being rigid. This granular composition is something ABCs make much harder, because every method in an ABC is mandatory and there is no natural way to express "any subset of these capabilities."

Protocol Inheritance: Building Complex Contracts

Protocols can inherit from other protocols, building up complexity in a structured hierarchy. This is useful when you have a natural progression of capabilities, a more capable interface that requires everything from a simpler one:

python
from typing import Protocol
 
class Drawable(Protocol):
    def draw(self) -> str:
        ...
 
class Animated(Drawable, Protocol):
    def animate(self, frame: int) -> str:
        ...
 
class Sprite:
    def draw(self) -> str:
        return "🎮"
 
    def animate(self, frame: int) -> str:
        return f"Frame {frame}"
 
def render_animation(obj: Animated) -> None:
    print(obj.draw())
    print(obj.animate(1))
 
render_animation(Sprite())  # Works, Sprite has both methods

This lets you define layered contracts without forcing implementers to inherit anything. (On the protocol side, remember that Animated must list Protocol explicitly in its bases; extending a protocol without repeating Protocol yields an ordinary class, not a new protocol.) An Animated object must do everything a Drawable can do, plus animation. Any class with both draw() and animate() satisfies Animated, regardless of its actual class hierarchy. You can use this pattern to build layered protocol hierarchies that match the natural capability tiers in your domain, a Readable protocol extended by Seekable extended by Buffered, for example, matching the progression of I/O capabilities.
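The Readable/Seekable/Buffered progression mentioned above can be sketched as a layered protocol hierarchy. This is an illustrative design, not the stdlib's actual io interfaces; `MemoryStream` and `rewind_and_read` are invented names:

```python
from typing import Protocol

class Readable(Protocol):
    def read(self, size: int) -> bytes:
        ...

class Seekable(Readable, Protocol):  # Protocol listed explicitly at each tier
    def seek(self, offset: int) -> int:
        ...

class Buffered(Seekable, Protocol):
    def peek(self, size: int) -> bytes:
        ...

class MemoryStream:  # Satisfies all three tiers structurally, inherits nothing
    def __init__(self, data: bytes) -> None:
        self.data = data
        self.pos = 0

    def read(self, size: int) -> bytes:
        chunk = self.data[self.pos:self.pos + size]
        self.pos += len(chunk)
        return chunk

    def seek(self, offset: int) -> int:
        self.pos = offset
        return self.pos

    def peek(self, size: int) -> bytes:
        return self.data[self.pos:self.pos + size]

def rewind_and_read(stream: Buffered, size: int) -> bytes:
    stream.seek(0)
    return stream.read(size)

print(rewind_and_read(MemoryStream(b"hello"), 4))  # b'hell'
```

A function that only needs `read()` annotates with `Readable`; one that needs the full buffered behavior annotates with `Buffered`, and `MemoryStream` qualifies for both.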

Real-World Example: Library Design Without Forcing Inheritance

Here's where protocols shine: designing libraries that work with user-provided classes. This is the scenario that protocols were fundamentally designed for, and seeing it in action makes the value immediately clear.

Imagine you're building a data processing library:

python
from typing import Protocol
 
class Processor(Protocol):
    def process(self, data: list[float]) -> list[float]:
        ...
 
class Pipeline:
    def __init__(self, *processors: Processor):
        self.processors = processors
 
    def run(self, data: list[float]) -> list[float]:
        result = data
        for processor in self.processors:
            result = processor.process(result)
        return result
 
# User code, no inheritance needed!
class Normalizer:
    def process(self, data: list[float]) -> list[float]:
        min_val = min(data)
        max_val = max(data)
        return [(x - min_val) / (max_val - min_val) for x in data]
 
class Smoother:
    def process(self, data: list[float]) -> list[float]:
        # Average each value with its immediate neighbors (window shrinks at the edges)
        windows = [data[max(0, i - 1):i + 2] for i in range(len(data))]
        return [sum(w) / len(w) for w in windows]
 
# Works seamlessly!
pipeline = Pipeline(Normalizer(), Smoother())
result = pipeline.run([1.0, 2.0, 3.0, 4.0, 5.0])
print(result)

Your users write their own Normalizer and Smoother classes. They never import Processor, never inherit from anything. The type checker understands they're compatible. This is what makes protocols powerful in library design. Notice what the library author has given up: nothing. The Pipeline class is fully typed, fully checked, and works correctly. What the library author has gained: zero coupling between the library's type definitions and user code. Users do not need to add your library as a dependency just to satisfy your type annotations. This is the gold standard for extensible Python APIs.

Type Checker Behavior: How Mypy and Pyright See Protocols

Type checkers treat protocols specially. Let's see what they catch and how they communicate compatibility decisions:

python
from typing import Protocol
 
class Logger(Protocol):
    def log(self, message: str) -> None:
        ...
 
class ConsoleLogger:
    def log(self, message: str) -> None:
        print(message)
 
class FileLogger:
    def write(self, message: str) -> None:  # Wrong method name!
        pass
 
def setup_logging(logger: Logger) -> None:
    logger.log("Starting...")
 
setup_logging(ConsoleLogger())  # ✓ Type checker: PASS
setup_logging(FileLogger())     # ✗ Type checker: FAIL, no log() method

Type checkers (mypy, pyright, pylance) understand that ConsoleLogger has the required log() method, so it's compatible. FileLogger doesn't, so it's not.

This is static analysis, the type checker doesn't run your code. It just reads the source and checks signatures. When mypy reports an error on setup_logging(FileLogger()), it will tell you specifically what is missing, which method name, what the expected signature is, giving you actionable information about exactly what needs to change. This precision is far more helpful than a runtime AttributeError that you only discover when that code path is actually executed.

Common Protocol Mistakes

Protocols introduce a few failure modes that are not immediately obvious, and being aware of them upfront will save you debugging time. The most common mistake is treating @runtime_checkable as full contract verification. Developers who are new to protocols often add @runtime_checkable and then use isinstance() checks as a substitute for type annotations, assuming that if isinstance(obj, MyProtocol) returns True, the object is fully compatible. As we covered earlier, this check is shallow, it only verifies method existence, not signatures. You can have a class that passes the runtime check but would cause a TypeError when called because its parameters are wrong. Always pair @runtime_checkable with type annotations, not instead of them.

The second common mistake is writing protocol methods with real implementations and expecting structurally compatible classes to pick them up. A class that merely matches a protocol's shape inherits nothing from it; a method body in a protocol is only used as a default when a class explicitly inherits from the protocol. If you are counting on shared default implementations across implicit implementers, you want an ABC or a mixin class, not a protocol. The third mistake is using protocols for classes you own and control. If you are writing both the interface definition and all the implementing classes, an ABC is usually cleaner because it gives you instantiation prevention, clearer error messages when a class is incomplete, and the option for shared default implementations. Protocols earn their keep when the implementing classes are outside your control.
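The default-body distinction is easy to verify. In this sketch (names invented for illustration), `Implicit` conforms structurally and shares nothing with the protocol, while `Explicit` subclasses it and inherits the default body:

```python
from typing import Protocol

class Greeter(Protocol):
    def greet(self) -> str:
        return "hello"  # Default body, only meaningful to explicit subclasses

class Implicit:
    def greet(self) -> str:
        return "hi"

class Explicit(Greeter):  # Explicit subclass: picks up the default implementation
    pass

print(Implicit().greet())  # 'hi'
print(Explicit().greet())  # 'hello'
```

Structural conformance is a type-checking relationship only; the runtime inheritance of `"hello"` happens solely because `Explicit` names `Greeter` as a base class.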

Covariance and Contravariance in Protocols

When protocol methods have parameters or return types, variance matters. Understanding how the type checker reasons about variance helps you write protocols that are correctly flexible without being incorrectly permissive:

python
from typing import Protocol
 
class Producer(Protocol):
    def produce(self) -> int:
        ...
 
class Consumer(Protocol):
    def consume(self, value: int) -> None:
        ...
 
class IntProducer:
    def produce(self) -> int:
        return 42
 
class NumberConsumer:
    def consume(self, value: int) -> None:
        print(f"Consumed: {value}")
 
def get_data(producer: Producer) -> int:
    return producer.produce()
 
def use_data(consumer: Consumer, value: int) -> None:
    consumer.consume(value)
 
x = get_data(IntProducer())           # ✓ Return type matches
use_data(NumberConsumer(), x)         # ✓ Parameter type matches

This is covariance (return types) and contravariance (parameter types) working as expected. The type checker ensures correctness. Return types are covariant, a method returning a subtype of the expected return type is compatible. Parameter types are contravariant, a method accepting a supertype of the expected parameter type is compatible. These rules exist because they preserve correctness: if you expect something that returns int, getting something that returns a more specific PositiveInt is safe. If you expect something that accepts int, getting something that accepts any Number is safe because int is a Number.
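Those variance rules can be demonstrated with the same two protocols. In this sketch, `FlagProducer` and `AnythingConsumer` are invented names: one returns `bool` (a subtype of `int`, safe covariant return), the other accepts `object` (a supertype of `int`, safe contravariant parameter):

```python
from typing import Protocol

class Producer(Protocol):
    def produce(self) -> int:
        ...

class Consumer(Protocol):
    def consume(self, value: int) -> None:
        ...

class FlagProducer:
    def produce(self) -> bool:  # bool is a subtype of int: covariant return, compatible
        return True

class AnythingConsumer:
    def consume(self, value: object) -> None:  # object is a supertype of int: contravariant parameter, compatible
        print(f"Consumed: {value}")

def get_data(producer: Producer) -> int:
    return producer.produce()

def use_data(consumer: Consumer, value: int) -> None:
    consumer.consume(value)

x = get_data(FlagProducer())     # Type checker: PASS
use_data(AnythingConsumer(), x)  # Type checker: PASS
```

Flipping either direction, a producer returning a *wider* type or a consumer accepting a *narrower* one, would be rejected by the type checker, because then the protocol's promise could be violated at a call site.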

Practical Gotchas: What You Need to Know

Gotcha 1: Runtime Checking Is Shallow

This point bears repeating with a concrete example because the gap between what you might expect and what actually happens can cause real bugs:

python
from typing import Protocol, runtime_checkable
 
@runtime_checkable
class Worker(Protocol):
    def work(self, hours: int) -> str:
        ...
 
class Employee:
    def work(self, hours: float) -> int:  # Wrong signature!
        return hours * 20
 
emp = Employee()
print(isinstance(emp, Worker))  # True, signature not checked at runtime!

@runtime_checkable only verifies method existence, not signatures. Use it for rough type checks only. If you rely on this check to gate critical business logic, you are operating with a false sense of security.

Gotcha 2: Protocols Don't Prevent Instantiation

Unlike ABCs, protocols don't stop you from instantiating an incomplete implementation. The protocol class itself does refuse instantiation, calling Drawable() raises TypeError: Protocols cannot be instantiated. The real trap is explicit subclasses: a class that inherits from the protocol without ever implementing its methods is constructed without complaint:

python
from typing import Protocol
 
class Drawable(Protocol):
    def draw(self) -> str:
        ...
 
# Drawable()  # TypeError: Protocols cannot be instantiated
 
class HalfBaked(Drawable):  # Explicit subclass, draw() never implemented
    pass
 
obj = HalfBaked()  # Works! An ABC would have refused this.

An ABC with an abstract draw() blocks instantiation of HalfBaked until the method is implemented. A protocol does not: the inherited stub body just sits there, and calling obj.draw() quietly returns None instead of the promised str. If you want incomplete implementations to fail loudly at construction time, that is a job for an ABC.

Gotcha 3: Self Types in Protocols

If your protocol has methods that return Self, be careful about how you annotate them:

python
from typing import Callable, Protocol, Self
 
class Chainable(Protocol):
    def then(self, fn: Callable[..., object]) -> Self:
        ...
 
class Builder:
    def then(self, fn: Callable[..., object]) -> "Builder":
        return self
 
# Type checkers understand this correctly.

Use Self from typing (Python 3.11+) or typing_extensions for older versions. The Self type is essential for protocols that define fluent interfaces or builder patterns, where methods return the same type they were called on. Without Self, you would have to use the concrete class name, which breaks when the protocol is implemented by a subclass.
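On versions before 3.11, if typing_extensions is not available, a bound TypeVar can play the role of Self. This is a sketch under assumptions, the `Builder` with a `steps` list is invented to show the chaining in action:

```python
from typing import Callable, Protocol, TypeVar

# Pre-3.11 pattern: T bound to the protocol stands in for Self
T = TypeVar("T", bound="Chainable")

class Chainable(Protocol):
    def then(self: T, fn: Callable[..., object]) -> T:
        ...

class Builder:
    def __init__(self) -> None:
        self.steps: list[Callable[..., object]] = []

    def then(self, fn: Callable[..., object]) -> "Builder":
        self.steps.append(fn)
        return self

b = Builder().then(print).then(len)
print(len(b.steps))  # 2
```

Annotating `self` with the bound TypeVar tells the checker that `then` returns whatever concrete type it was called on, which is exactly what `Self` expresses more concisely in 3.11+.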

When NOT to Use Protocols

Protocols are powerful, but not always the right choice. Knowing when to reach for a different tool is just as important as knowing how protocols work:

  • Don't use for simple value objects: If you're just passing data around, regular classes are fine.
  • Don't use when inheritance makes sense: If your classes naturally form a hierarchy, use ABCs.
  • Don't use for maximum runtime safety: If you need strict runtime checks, ABCs are better.
  • Don't use in simple scripts: Protocols add complexity. For throwaway code, duck typing is faster.

Combining Protocols and ABCs: The Hybrid Approach

You can mix both patterns, getting the strengths of each in different parts of your system. This is common in larger codebases where some parts benefit from strict nominal typing and others need the flexibility of structural typing:

python
from abc import ABC, abstractmethod
from typing import Protocol
 
class Logger(ABC):
    @abstractmethod
    def log(self, message: str) -> None:
        pass
 
class FileWriter(Protocol):
    def write(self, content: str) -> None:
        ...
 
class DatabaseLogger(Logger):
    def log(self, message: str) -> None:
        self.write(message)
 
    def write(self, content: str) -> None:  # Structurally satisfies FileWriter
        print(f"DB: {content}")
 
# This works: DatabaseLogger is a Logger (nominal)
logger: Logger = DatabaseLogger()
 
# And it also works as a FileWriter (structural)
def save_data(writer: FileWriter, content: str) -> None:
    writer.write(content)
 
save_data(DatabaseLogger(), "audit entry")  # Works at runtime and type-checks

This gives you the best of both worlds: inheritance structure and structural flexibility. Use ABCs to enforce structure within your own codebase, and use protocols to define the interfaces that external code needs to satisfy. This hybrid approach is the most pragmatic strategy for production systems that need to be both internally consistent and externally open.

Summary: Choose Your Subtyping Strategy

Protocols represent a mature, powerful tool in Python's type system that bridges the gap between the language's dynamic duck-typing heritage and the demands of modern, statically-checked codebases. They are not a replacement for ABCs or for plain duck typing, they are a third option with a distinct set of strengths that becomes increasingly valuable as your code interacts with more external systems, more third-party libraries, and more diverse teams with diverse codebases.

The key insight to carry with you is this: use the most permissive type annotation that correctly captures your function's actual requirements. If you need something that can be iterated, annotate with Iterable. If you need something that can be serialized, define a Serializable protocol with exactly the methods you call. Do not annotate with a concrete class when a protocol is more accurate, the concrete class overfits the type, making your function harder to use and your tests harder to write. Protocols are Python's way of saying "I do not care what this object is; I care what it can do." Use them when:

  • You're designing library APIs
  • You want flexibility without inheritance
  • Your users bring their own classes
  • You want type safety alongside duck typing

For production code, protocols keep your APIs open, your type checking tight, and your architecture clean. They are the tool that lets your code work with the entire Python ecosystem rather than just the classes you personally defined.

That's the power of structural subtyping.
