Statically vs dynamically typed programming languages. This controversy is as old as the world, yet it still pops up at almost every developer gathering. Everyone has a preference and everyone zealously defends their side. Let me tell you how I see things in this controversial matter.
I am not going to explain the basics of what a type system is or what it means for a programming language to be statically or dynamically typed. I assume the reader knows the basics. I am going to focus on why you should prefer one over the other, and when.
Experience on both sides
First things first. To hold an unbiased position on this battlefield you need considerable experience in both worlds - static and dynamic. If you come from a statically typed programming background, the idea of relying on dynamic typing can make you feel uncomfortable. This discomfort usually stems from a lack of experience or from a belief that static typing is more reliable. On the other hand, if you have worked with dynamic languages throughout your professional career, having to declare each type before you use it may feel like boilerplate that clutters your code and makes you walk instead of run. The bottom line is that you need a lot of experience on both sides before you can judge objectively. Otherwise, all the arguments you throw at your friends or colleagues from the other side remain simply a personal preference founded on a well-established habit. I’ve had my share of battles on this eternal battleground, and most of the time the arguments I hear are either too subjective or based on false assumptions. Without further ado, let’s dive into this benefit-cost analysis paralysis.
Values and Costs
Static and dynamic typing both make promises and each has values and costs.
Static typing values
- The compiler saves you from type errors.
This statement sounds quite alarming. It ultimately assumes that runtime type errors will occur unless the compiler performs type checks. That should mostly be true if you are a bunch of cowboys who write some code and push it straight to production - no automated tests, no manual testing. Next comes casting. The moment you start casting, all bets are off. The compiler excuses itself and you are left on your own. I cannot stress this enough:
The code is only as good as your tests.
Developers from a statically typed background may find comfort in the notion that static typing provides safety, but in reality that’s an illusion. Leaving that argument aside, the bottom line is that no one needs saving. Compiler-preventable errors were never a major concern. Think back and remember how many times you fixed a production bug caused by a runtime type error. Not many, I guess. Now remember how many times you have fixed a null pointer exception (NPE) in production. Many, I guess. NPEs are just one example showing that the errors you should really worry about are equally likely to happen under both type systems.
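To make the NPE point concrete, here is a minimal sketch in Python - the function and data are hypothetical - showing that the “null slips through and blows up at runtime” scenario exists regardless of the type system. The static checker would not have saved you either, since `null`/`None` typically passes the type check:

```python
def find_user(users, user_id):
    """Return the user dict with the given id, or None if absent."""
    for user in users:
        if user["id"] == user_id:
            return user
    return None  # the dangerous case, analogous to returning null in Java

users = [{"id": 1, "name": "Ada"}]

user = find_user(users, 2)  # no such user, so user is None
try:
    print(user["name"])  # blows up at runtime, just like an NPE
except TypeError as error:
    print(error)  # 'NoneType' object is not subscriptable
```

A test that exercises the `None` branch would catch this; a type checker that treats `None` as a valid value would not.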
- Type information serves as documentation.
This argument is quite subjective. It rests on the premise that programmers cannot infer types from context, so they won’t understand code without types. If you are used to seeing type declarations in your code, you may find them useful. If you are used to dynamic typing, you find type declarations verbose and distracting. It all comes down to habits and experience on this one. If you are coming from a statically typed language like Java or C#, you may feel insecure without the type declarations in Ruby or Python. But once you gain enough experience working with dynamic languages, you will find the less verbose syntax easier to read, write, and understand.
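As a small illustration of inferring types from context, consider this sketch (the `LineItem` shape and prices in cents are made up for the example). There is not a single type annotation, yet the names and usage tell you exactly what shapes are expected:

```python
from collections import namedtuple

# Hypothetical order line: price is in cents to keep arithmetic exact.
LineItem = namedtuple("LineItem", ["price", "quantity"])

def total_price(line_items):
    # No declarations, yet the context documents the types:
    # line_items is an iterable of objects with price and quantity.
    return sum(item.price * item.quantity for item in line_items)

order = [LineItem(999, 2), LineItem(450, 1)]
print(total_price(order))  # 2448
```

Whether the explicit Java-style version of the same function reads better is, again, mostly a matter of habit.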
- Type information helps the tooling ecosystem.
Whether you write TypeScript in Visual Studio Code or Java in the Eclipse IDE, the user experience is quite smooth when your editor can rely on type information. You get nice auto-completion. You can rename with confidence. You may achieve nearly the same experience with a dynamic language, for example by using ctags and writing grepable code, but I have to agree that it’s not as slick as the tooling built around static type systems.
- The compiled code is optimized to run quickly.
This argument holds only if you accept the underlying assumption - that the application runs slowly without these optimizations. For certain applications, well-crafted statically typed code will outperform its well-crafted dynamically typed twin. If your case falls into this category, you have no other choice. If you must, you must. But in the vast world of web development, applications are not slow because of a poor choice of programming language. They are slow because of developer mistakes, wrong infrastructure decisions, badly designed services, and so on - a never-ending list that has nothing to do with static or dynamic typing. The root cause of most performance problems we face every day is developers who cannot work with a database efficiently, especially those who rely too heavily on an ORM library. The point is: if the application is slow, the chance that the problem lies in the programming language is minimal.
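The classic database mistake hinted at above is the N+1 query pattern that an ORM can quietly hide. Here is a self-contained sketch - the `Post`/`Author` data and the fetch functions are stand-ins for real database calls, with a counter simulating round trips:

```python
from collections import namedtuple

Post = namedtuple("Post", ["title", "author_id"])
Author = namedtuple("Author", ["id", "name"])

AUTHORS = {1: Author(1, "Ada"), 2: Author(2, "Grace")}
QUERY_COUNT = {"n": 0}  # pretend round-trip counter

def fetch_author(author_id):
    QUERY_COUNT["n"] += 1          # one round trip per call
    return AUTHORS[author_id]

def fetch_authors_by_ids(ids):
    QUERY_COUNT["n"] += 1          # one round trip for the whole batch
    return {i: AUTHORS[i] for i in ids}

posts = [Post("Hello", 1), Post("World", 2), Post("Again", 1)]

# Naive: one query per post - the N+1 pattern (3 queries for 3 posts).
QUERY_COUNT["n"] = 0
naive = [(p.title, fetch_author(p.author_id).name) for p in posts]
print(QUERY_COUNT["n"])  # 3

# Batched: a single query fetches every author at once.
QUERY_COUNT["n"] = 0
authors = fetch_authors_by_ids({p.author_id for p in posts})
batched = [(p.title, authors[p.author_id].name) for p in posts]
print(QUERY_COUNT["n"])  # 1
```

Fixing this one access pattern usually buys far more performance than switching to a compiled language ever would.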
Dynamic typing values
- Faster development cycles.
The code is interpreted and dynamically loaded without a compile cycle. Developers used to static typing strongly believe that a compiler guarding them against runtime type errors is a necessity. They trade away programming efficiency to have that mighty guardian on their side. That is a solid argument only if you assume that without the compiler these type errors will occur and that the compiler is the only thing that can save you from them. To put it another way - that the time spent chasing and fixing type errors is greater than the time lost over the course of application development. I was once a believer in this gospel. But then I removed the compiler and gained productivity. It’s a deal I am willing to take at every chance.
- Less boilerplate code.
Source code does not include explicit type information, so dynamic languages are more succinct than their statically typed counterparts. We already discussed this premise from the opposite point of view among the static typing values. The bottom line is that it is quite subjective. Programmers used to dynamic typing find code easier to understand when it does not contain type declarations; they infer an object’s type from its context. Programmers used to static typing feel just the opposite.
- Metaprogramming is easier.
Metaprogramming, or writing code that writes code, is a double-edged sword. It can help the handy craftsman write code that reads like prose, but it can lead to quite a mess when wielded by less experienced programmers. If you have ever solved a complex problem with a simple DSL, metaprogramming becomes a must-have feature. But if you have ever chased an elusive bug buried deep in obscure metaprogramming, you become an opponent for life, deeming it the ultimate gun for shooting yourself in the foot or inflicting great pain upon your colleagues. Metaprogramming is a scalpel: dangerous in the wrong hands, life-saving when used properly. A great tool that requires greater responsibility. Used carefully, it has great value and stands as a strong argument in favor of dynamic typing.
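For a taste of the benign end of the spectrum, here is a minimal Python sketch (the `Settings` class is hypothetical) using the `__getattr__` hook - the rough analogue of Ruby’s `method_missing` - to expose configuration keys as attributes without writing an accessor per field:

```python
class Settings:
    """Turns keyword arguments into read-only attributes at runtime."""

    def __init__(self, **kwargs):
        self._data = kwargs

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, so it
        # synthesizes accessors for whatever keys were passed in.
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

settings = Settings(host="localhost", port=8080)
print(settings.host)  # localhost
print(settings.port)  # 8080
```

Ten lines replace a whole family of hand-written getters - and, used carelessly, the very same hook is how those elusive bugs get buried.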
Depend on behavior, not on type
Once you begin to treat your objects as if they are defined by their behavior rather than by their class, you enter into a new realm of expressive flexible design. - Sandi Metz
Costs of concretion and costs of abstraction
Let’s go back to the ultimate goal we constantly chase no matter what language, framework, or tools we use - we want to work effectively. To work effectively, we need to reduce the cost of change. Concretion and abstraction each bring their own bag of costs, and you have to understand them well before taking a side or making a choice. Concrete code is harder to extend but easier to follow, having types that serve as documentation. Abstract code may seem harder to read to the untrained eye but is far easier to change.

Once you develop the ability to tolerate ambiguity about the class of an object, you are set on the road to designing abstractions with confidence and without fear. You stop worrying about the inner details of your classes and start envisioning your objects as abstract entities that interact through public interfaces.

But wait, can we have both - abstract objects with well-documented interfaces? Sure, simply write some tests. Good tests are the best code documentation any team can wish for. I’ve seen many Java developers neglect tests because they rely too much on the static type checker. Don’t be that guy! Types cannot document your code as well as descriptive tests can. Types “protect” you from a very limited set of runtime errors. Countless production systems have broken right after the phrase: “If it compiles, it works.”
Your faith in your compiler may be misplaced, as is your faith in statically typed languages.
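Here is a small sketch of what “tests as documentation” means in practice - the `Wallet` class and its rules are invented for the example, but notice how the test names read as a specification of the public interface, something a type signature alone cannot express:

```python
class Wallet:
    """A toy wallet; balance is an integer amount."""

    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount
        return self.balance

# The tests double as documentation of the intended behavior:
def test_deposit_increases_balance():
    assert Wallet(10).deposit(5) == 15

def test_deposit_rejects_non_positive_amounts():
    try:
        Wallet().deposit(0)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_deposit_increases_balance()
test_deposit_rejects_non_positive_amounts()
```

A static signature like `int deposit(int amount)` says nothing about the “must be positive” rule; the second test states it plainly.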
Reduce coupling with dependency inversion
The Dependency Inversion Principle, the “D” in the SOLID acronym, states that high-level modules should not depend on low-level modules; both should depend on abstractions. Furthermore, abstractions should not depend on implementation details. It is a way of achieving Inversion of Control.
While you’ll have to write a good deal of boilerplate code to define all the interfaces you need to decouple your abstractions in a static language, in dynamic languages that is the default. You don’t have to define any explicit interfaces. The client simply sends a message to the object, having no idea about its concrete implementation. In a way, you write loosely coupled code simply by using a dynamic language, without any conscious effort. That is why the dependency injection pattern got so popular in statically typed languages like Java and is barely mentioned in interpreted, dynamically typed languages like Python. To leverage dependency injection in Java you need a complex framework like Spring, while in Python dependency injection can be implemented easily using language fundamentals. Duck typing is the most obvious way to achieve loose coupling in the spirit of the Dependency Inversion Principle.
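Here is what that looks like in Python - a sketch with made-up class names, and no framework or interface declarations in sight. The service never names a concrete mailer; it depends only on the `send(to, body)` message:

```python
class SmtpMailer:
    """Production collaborator (stubbed here with a print)."""
    def send(self, to, body):
        print(f"SMTP mail to {to}: {body}")

class FakeMailer:
    """Test double that records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class SignupService:
    def __init__(self, mailer):
        # The dependency is injected via the constructor; any object
        # answering send(to, body) will do - no interface required.
        self.mailer = mailer

    def register(self, email):
        self.mailer.send(email, "Welcome!")

mailer = FakeMailer()
SignupService(mailer).register("ada@example.com")
print(mailer.sent)  # [('ada@example.com', 'Welcome!')]
```

Swapping `FakeMailer` for `SmtpMailer` in production requires changing one line at the composition point - the whole pattern, with no Spring in sight.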
Reduce the cost of change with duck typing
If it walks like a duck and it quacks like a duck, then it must be a duck. Methods that cannot behave correctly unless they know the classes of their arguments make code less flexible and harder to change when new classes appear or existing classes change. The more you depend on a class implementation, the less flexible your code is. When the class you depend on changes, you must change too. But fear not, duck typing provides a way out.
If the object acts like a duck then its class is irrelevant. Duck types are public interfaces that are not tied to a specific class implementation. Ducks are objects defined by their behavior rather than by their class. The expectations about the behavior of an object define its public interface.
These abstract interfaces make your code more flexible by replacing a costly dependency on a concrete class with a more forgiving dependency on a message. You don’t care what the underlying type is as long as the object can handle the message you send it. Duck typing makes your code more abstract and less concrete, easier to extend, with the specific class hidden behind the duck.
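The idea fits in a few lines of Python (the classes here are the usual illustrative ducks, not production code). The caller depends only on the `quack` message, never on a class:

```python
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I'm quacking!"

def make_it_quack(duck_like):
    # No isinstance checks: any object answering quack() is a duck here.
    return duck_like.quack()

print(make_it_quack(Duck()))    # Quack!
print(make_it_quack(Person()))  # I'm quacking!
```

Adding a third quacking class tomorrow requires no change to `make_it_quack` - that is the reduced cost of change the section is about.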
Dynamic typing allows you to trade compile time type checking, a serious restriction that has a high cost and provides little value, for the huge gains in efficiency provided by removing the compile cycle. This trade is a bargain. Take it.
- Practical Object-Oriented Design, by Sandi Metz
- Static Typing Where Possible, Dynamic Typing When Needed, by Erik Meijer and Peter Drayton at Microsoft