The art of programming is the art of organizing complexity; mastering multitude & avoiding its chaos as effectively as possible.
– Edsger W. Dijkstra
Modularization is one of the key techniques for keeping code readable, scalable, and easy to maintain. As projects evolve, especially those with a larger scope, it becomes an important element ensuring order and transparency in the code structure. However, as the project progresses, new changes are introduced, and developers come and go, it’s easy to forget the initial architecture assumptions. Automated tests come to the rescue. But what should we test, and how should we do it? Without solid tests, we cannot ensure that our modular structure meets the goals we set for ourselves.
Certain aspects of modules, such as inter-module dependencies and dependency cycles, can be tested in Java using ArchUnit. However, this library does not solve all problems. Let’s take a closer look at two other particularly important cases:
When we have many modules but one of them contains a significant majority of the code, modularization loses its meaning. For example, if we have 15 modules and one of them contains 80% of the code, we cannot speak of real modularization because almost all the code is contained in one module. This is a situation that needs to be eliminated through automated testing of module size.
It is also important to ensure that individual modules are small and stable in terms of the changes introduced (how often the module is changed). For example, a “commons” module should be small and contain general functionality that is often used in various parts of the project. If this module becomes too large, it can lead to problems with managing and modifying the code. Of course, this is an approximation, as it is more important to minimize the module interface and maintain its stability in terms of changes, which does not necessarily mean minimizing the module size.
Because the issues described above are very important to me, I decided to verify them in tests. The implementation can be very simple and involve counting lines of code in packages. To my surprise, I did not find a ready-made tool that would allow me to quickly check the size of modules.
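The implementation can indeed be that simple. For illustration, here is a naive sketch that counts lines of *.java files per package using only JDK file APIs; the class name and parameters are hypothetical, and it ignores details such as comments or blank lines:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

class NaiveModuleSize {

    // Counts lines of all *.java files under a module's root package directory.
    static long linesOfCode(Path sourceRoot, String modulePackage) throws IOException {
        Path moduleDir = sourceRoot.resolve(modulePackage.replace('.', '/'));
        try (Stream<Path> files = Files.walk(moduleDir)) {
            return files
                    .filter(path -> path.toString().endsWith(".java"))
                    .mapToLong(NaiveModuleSize::countLines)
                    .sum();
        }
    }

    private static long countLines(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.count();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

In a test, such numbers can be compared against the total LOC to assert that no single module dominates the codebase.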
So I prepared a Java library that allows testing the size of modules - module-size-calculator. This library enables analyzing the size of modules in a project based on the number of lines of code (LOC).
Just add the dependency:
<dependency>
    <groupId>pl.tfij</groupId>
    <artifactId>module-size-calculator</artifactId>
    <version>1.0.0</version>
</dependency>
From now on, we can write tests like
ProjectSummary projectSummary = ModuleSizeCalculator.project("src/main/java")
        .withModule("com.example.module1")
        .withModule("com.example.module2")
        .analyze()
        .verifyEachModuleRelativeSizeIsSmallerThan(0.3)
        .verifyModuleRelativeSizeIsSmallerThan("com.example.commons", 0.1);
The library allows for various assertions of module size, and if something has not been foreseen, a classic JUnit assertion can be written based on the generated report. The library also allows you to generate a report in the form of a Mermaid pie chart, which can then be included, for example, in documentation.
For more details and examples, visit GitHub.
Modularization is a key aspect of the project. Test it to ensure it does not fade over time.
Simplicity is the goal.
– Sean Parent, Menlo Innovations
The Open-Closed Principle, described in SOLID, ensures project flexibility, but does it always lead to optimal, future-ready code? It sounds promising - open for extension, closed for modification. Let’s take a closer look.
The Open-Closed Principle seems like a reasonable approach. By designing our code to easily add new features without modifying existing code, we become prepared for unexpected changes and project expansions. However, is this always necessary? This is where the problem of overengineering arises.
Overengineering is when our code is more complicated than necessary to prepare for changes and scenarios that may never occur. It’s like building a bridge in the desert, hoping it might someday be useful as a trade corridor.
Anticipating potential, but unknown, changes can lead to excessive code abstraction. Additional layers, interfaces, and structures intended to provide flexibility introduce unnecessary complexity, all in order to avoid touching individual pieces of code in the future. It is the fear of future change that drives overengineering. Fear stemming from poor-quality code.
Instead of focusing on creating code that is ready for every eventuality, it is worth concentrating on proper engineering. Rather than avoiding changes, let’s focus on creating code that is easy to change. High-quality code, clean code, automated testing, modularization, high cohesion, etc. - these are the foundations that make our code flexible without unnecessary abstractions.
Engineering should focus on creating solutions that are simple and efficient. For example, when you have one discount calculation algorithm, you don’t need to implement the strategy pattern thinking, “maybe someday we’ll have a second algorithm, and it will be easy to replace”. This is overengineering entirely consistent with the open-closed principle. If the code is of good quality, introducing abstractions when needed shouldn’t be a problem.
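To make the discount example concrete, here is a sketch of both variants; all type names are made up for illustration:

import java.math.BigDecimal;

record Order(BigDecimal total) { }

// Overengineered: an abstraction introduced "just in case" a second algorithm appears one day.
interface DiscountStrategy {
    BigDecimal discountFor(Order order);
}

class RegularDiscountStrategy implements DiscountStrategy {
    public BigDecimal discountFor(Order order) {
        return order.total().multiply(new BigDecimal("0.05"));
    }
}

class DiscountCalculator {
    private final DiscountStrategy strategy; // only one implementation ever existed

    DiscountCalculator(DiscountStrategy strategy) {
        this.strategy = strategy;
    }

    BigDecimal calculate(Order order) {
        return strategy.discountFor(order);
    }
}

// Sufficient: a plain method that is easy to extract into a strategy when a second algorithm actually shows up.
class SimpleDiscountCalculator {
    BigDecimal calculate(Order order) {
        return order.total().multiply(new BigDecimal("0.05"));
    }
}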
Handle the Open-Closed Principle with care. Code should be easy to modify, not necessarily prepared for specific, uncertain changes.
The most important transformation for most organizations is to enable people and teams to do creative high-quality work. Large scale, incremental change is the key to achieving this.
– David Farley
In recent years, service architectures, especially microservices, have gained enormous popularity, yet the approach to end-to-end (E2E) testing often remains unchanged. We hear that tests verifying the operation of the entire system are crucial in the software development process, especially with distributed architectures. Statements like “We need to prove that the system works as a whole. We used to have a monolith and E2E tests; now we have independent microservices, so E2E tests are even more necessary” arise.
In this post, I use the term E2E tests to refer to tests of the entire system. These are cases where the test requires running multiple services. Therefore, for example, front-end tests using a browser don’t meet this definition if the backend is a service mock, stub, etc.
Overestimating the importance of whole-system tests is a symptom of monolithic thinking. When we need to test the system as a whole, it means that key attributes of distributed architecture, such as independence of changes and deployments, haven’t been achieved. Furthermore, deployment independence is ingrained in the definition of microservices. This statement could end the discussion. However, let’s take a closer look at the negative aspects of E2E tests in microservices.

In the case of a service-oriented architecture, E2E tests not only involve issues of cost, speed, stability, and complexity, as described in the testing pyramid, but they also significantly affect workflow. Since their purpose is cross-team testing, a team of testers is often created to develop and maintain them. This solution isn’t scalable and introduces delays as work passes through multiple teams—from developers, through testers, to deployment/release. In another approach, responsibility for E2E tests can be shared by all teams. In this model, it’s common for changes in one service to cause errors in tests and block deployments of another team’s service.

In both configurations, having multiple releases per day will be challenging, and their schedule will be susceptible to unpredictable delays. Development teams will lose independence and spend more time on communication. Introducing the first E2E test entails a range of problems, such as deciding who will maintain the E2E tests, how they will be run, how to ensure independence of development teams, and how to maintain deployment independence, etc.
The need for E2E tests may arise from two main reasons. The first is when we have a solid system with independent services, but the manager is stuck in mental monolith thinking. They don’t understand the concept of independent services with clear boundaries in the form of contracts. Fear of system stability and reluctance to take responsibility for errors in the event of a failure may also play a role. In such cases, the solution may be education or even changing the manager. In extreme cases, however, this may mean a cultural revolution in the company.

The second reason is the state of our architecture, where elements form a distributed monolith and are strongly interconnected. In this case, it’s worth analyzing contracts between services, checking if E2E tests are not the result of abandoning strict contracts in favor of loose ones, if service APIs are consumed according to the X-as-a-service (XaaS) pattern, or if consumer-driven contracts have been applied. Lack of contracts doesn’t just mean a lack of endpoint description. It can be reflected in formulations such as “the system must be run on a pre-production or UAT environment with production data for n days because we can’t predict all data cases and event combinations.”
If you decide to use E2E tests, make sure they are fast and stable, which is a challenge in itself. Additionally, introducing the first E2E test will have a significant impact on the entire system. It will be challenging to determine who will maintain the E2E tests and how they will affect the workflow of all teams. Therefore, limit their scope only to critical areas of the system and control their number.
Similarly to what I wrote in another post, Documentation is not a Requirement, and above, E2E tests are not a requirement either. Consider what risks we want to minimize and what problems we want to solve with them. How else can we approach these issues? Much depends on the context; nevertheless, ultimately, in many cases, we can do without any E2E tests.
The topic of safely deploying independent services is extensive, so I don’t intend to discuss it in full here. Below, I raise a few key issues.
Firstly, define service boundaries through API contracts. These contracts should be thoroughly tested. When designing services, it’s also worth considering which processes require testing in full and why. The choice between orchestration and choreography has an impact on testing. In other words, it’s usually easier to test a process managed by a single service because tests can be limited to that one service.
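As a rough sketch of that idea, the orchestrating service can be tested on its own, with its collaborators replaced by stubs that honor the agreed contract (all names below are hypothetical):

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

interface PaymentClient { // the contract with the payment service
    boolean charge(String orderId);
}

class OrderOrchestrator {
    private final PaymentClient payments;

    OrderOrchestrator(PaymentClient payments) {
        this.payments = payments;
    }

    String placeOrder(String orderId) {
        return payments.charge(orderId) ? "CONFIRMED" : "REJECTED";
    }
}

class OrderOrchestratorTest {
    @Test
    void confirmsOrderWhenPaymentSucceeds() {
        OrderOrchestrator orchestrator = new OrderOrchestrator(orderId -> true); // stubbed payment service
        Assertions.assertEquals("CONFIRMED", orchestrator.placeOrder("42"));
    }
}

The collaborating service is then covered by its own contract tests, so no second service has to be started.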
To minimize deployment-related risks, consider using practices such as blue-green deployment, canary release, or feature flags.
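A minimal, framework-free sketch of the feature-flag idea (the flag and class names are made up; real projects usually reach for a toggle library or a configuration service):

import java.util.Set;

class FeatureFlags {
    private final Set<String> enabled; // e.g. loaded from configuration at startup

    FeatureFlags(Set<String> enabled) {
        this.enabled = enabled;
    }

    boolean isEnabled(String flag) {
        return enabled.contains(flag);
    }
}

class CheckoutService {
    private final FeatureFlags flags;

    CheckoutService(FeatureFlags flags) {
        this.flags = flags;
    }

    void checkout() {
        if (flags.isEnabled("new-pricing-engine")) {
            // new code path: deployed, but switched off until verified in production
        } else {
            // current, proven code path
        }
    }
}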
Ultimately, if the system needs to be tested as a whole, perhaps microservices aren’t the best choice; maybe a monolith and monorepo will work better.
Don’t get stuck in mental monolith thinking. A monolith isn’t just an architecture; it’s also a way of thinking about a problem.
There’s no sense in being precise when you don’t even know what you’re talking about.
– John von Neumann
In my professional life, I’ve encountered guidelines stating that “every system must have documentation” many times. Typically, based on requirements formulated this way, someone from the team would prepare a document called documentation. Often, it was a forced document that didn’t change anything in terms of the system’s usability and didn’t make anything easier.
In this post, I’d like to present my three-point approach to tasks and problems related to documentation.
The IT community often perceives documentation creation as a necessary requirement that few people want to fulfill. The most important task of documentation, i.e., achieving a goal or solving a problem, is not recognized. Without considering the goal, we prepare a document that is supposed to address all issues of all stakeholders, if it addresses any at all.
Let’s consider some example goals and problems that documentation is supposed to address:
1. The system architecture is complex and hard to understand.
2. Onboarding new employees takes a long time.
3. Deploying the system by the Ops team is a challenge.
4. It is hard to figure out how to use our library.
Documentation can potentially be a solution to all these problems. However, let’s take a closer look at them, leading us to the second point.
Creating documentation often seems like a remedy for a variety of problems, but it might be worth finding and eliminating their source instead of applying a plaster in the form of an additional document.
If the architecture is complex (problem 1), instead of writing extensive documentation explaining all intricacies, it’s better to spend time simplifying the architecture to make it more understandable.
If onboarding time for employees is long (problem 2), it’s worth considering why. Is it due to poor code quality, non-compliance with standards, lack of code modularity? All these cases can be addressed differently than with documentation, while simplifying the lives of all code users.
If deploying the system by the Ops team is a challenge (problem 3), instead of preparing deployment specifications, DevOps practices can be applied. Instead of introducing exotic deployment methods, the company may have a standard that can be applied. You can also consider applying the ‘you build it, you run it’ approach.
If library usage is the problem (problem 4), it’s worth working on the API. First and foremost, check if the library has a clearly defined API, if it’s easy to use, and if methods and types are unambiguous.
I wouldn’t want to be misunderstood; I’m not against the written word. Often, creating documentation is the best way to address many problems. However, it’s worth considering its form. For describing a framework that other teams will integrate with, a tutorial might be best, while a tutorial won’t work for describing that framework for developers who are developing it. Architectural Decision Records (ADRs) can be used to describe architectural decisions. Different cases require different forms of documentation. Sometimes we’ll need many different forms in a single system.
The same goes for diagrams. If a diagram isn’t understandable without someone describing it, it probably presents too many concepts at once and should be split into several simpler ones.
If someone asks you to prepare documentation, before you do anything, ask what problem we are solving and solve it in the right way.
I consider tools for static code analysis extremely useful. It’s worth using them, even if they sometimes make life a bit harder. Ultimately, with their help, the code of the application is better. I’ve also noticed that many people are more receptive to feedback on code formatting from a machine than from another person, and any potential annoyance is directed towards the computer.
It turns out that it’s quite easy to add custom rules to Checkstyle. By this, I mean writing your own check and using it in your own project. Adding a check to the main library is a completely different story. It can take years from submitting the idea and presenting the Proof of concept (PoC) to the merge!
In the project I’m working on, it often happened (several times a month) that the rule for formatting method parameters was violated – whether parameters were in one line or each in a separate line. The code looked something like this:
public int fun(int a, int b,
               int c) {
    ...
}
After mentioning this a few times during code reviews, I decided to set up an appropriate rule in Checkstyle. Unfortunately, it turned out that no such rule existed. The search that started on the internet and went through GitHub issues ended with my own library.
The library currently contains four checks related to method and constructor parameters, both in their declaration and when called. Using it is simple: just add the library as a dependency of the Checkstyle plugin. Below is an example for the Gradle Kotlin DSL:
plugins {
    java
    checkstyle
}

dependencies {
    checkstyle("pl.tfij:check-tfij-style:1.2.1")
}
Then add the checks to the Checkstyle configuration:
<module name="MethodParameterAlignment"/>
<module name="MethodParameterLines"/>
<module name="MethodCallParameterAlignment"/>
<module name="MethodCallParameterLines">
    <property name="ignoreMethods" value="Map.of"/>
</module>
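With these checks in place, the method from the earlier example would have to be reformatted, for instance like this (all parameters on one line, or each on its own, aligned line):

public int fun(int a,
               int b,
               int c) {
    ...
}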
Formatting errors, such as those mentioned above, are caught during the project build stage, and I no longer have to mention them during code reviews.
More details on GitHub: https://github.com/tfij/check-tfij-style
Use static code analysis. I also hope that the checks from my library will be useful to you.
Most papers in computer science describe how their author learned what someone else already knew.
– Peter Landin
This post is a brief story of how good intentions can lead to disaster when forgetting about JVM’s internal mechanisms and how, once again, Kent Beck’s approach - “Make it work, Make it right, Make it fast” - came into play.
Recently, I stumbled upon a rather unreadable piece of code. Below, I present its essence:
private Map<String, Map<String, Set<String>>> map = new HashMap<>();

void put(String a, String b, String c) {
    map.putIfAbsent(a, new HashMap<>());
    map.get(a).putIfAbsent(b, new HashSet<>());
    map.get(a).get(b).add(c);
}

boolean contains(String a, String b, String c) {
    return map.getOrDefault(a, new HashMap<>())
            .getOrDefault(b, new HashSet<>())
            .contains(c);
}
Of course, in the original code, methods weren’t as separated, the class had a few hundred lines, and everything was much more tangled.
After several refactoring steps, I replaced Map<String, Map<String, Set<String>>> with Set<Key>, resulting in something like:
private Set<Key> set = new HashSet<>();

void put(String a, String b, String c) {
    set.add(new Key(a, b, c));
}

boolean contains(String a, String b, String c) {
    return set.contains(new Key(a, b, c));
}

record Key(String a, String b, String c) { }
I name the variables in the example a, b, c, etc., to avoid introducing domain intricacies.
Satisfied with the results, I deployed the change. After a few minutes, I checked the service metrics, and there was a huge spike in memory usage. Luckily, the deployment wasn’t on the production environment.
It turned out that the new data structure consumed several times more memory. But that’s not all - it’s not a small collection; it stores millions of elements. The collection in the original version weighed around 400MB, while in the “improved” one, it was about 1100MB.
The increase in memory usage in this case stems from the mechanism of creating and storing Strings in the JVM. Java has several optimizations on strings. In particular, string literals go into the string pool in memory and can be reused multiple times, e.g.,
String x = "Lorem ipsum";
String y = "Lorem " + "ipsum";
System.out.println(x.equals(y)); // prints true
System.out.println(x == y); // prints true
This isn’t true for strings that aren’t literals, e.g.,
int i = 0;
String x = "Lorem ipsum" + i;
String y = "Lorem ipsum" + i/2;
System.out.println(x.equals(y)); // prints true
System.out.println(x == y); // prints false
One can force adding a string to the string pool using the intern() method, but this solution has its nuances and, in my opinion, can lead to errors. This solution may also result in other memory issues.
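For completeness, a small illustration of intern() in the spirit of the previous snippets:

int i = 0;
String x = ("Lorem ipsum" + i).intern();
String y = ("Lorem ipsum" + i/2).intern();
System.out.println(x.equals(y)); // prints true
System.out.println(x == y); // prints true – both now point to the pooled instance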
The described behavior of strings makes the implementation with Map significantly more memory efficient in my case. The map holds a reference, which provides noticeable gains when there are many entries with the same key. Additionally, in the described case, the strings were quite long – consisting of several dozen characters.
To better understand what’s happening in the JVM, consider the following example:
map.computeIfAbsent(new String("a"), x -> new HashSet<>()).add(new String("b1"));
map.computeIfAbsent(new String("a"), x -> new HashSet<>()).add(new String("b2"));
After executing the first line of code, the following strings will be created:
new String("a") – as a key, the map holds a reference to this object
new String("b1") – value in the set – the set holds a reference to this object, and the map holds a reference to the set.

After executing the second line of code, the following strings will be created:

new String("a") – may be cleaned up by GC because equals will be called on the string when adding to the map, and such a key already exists
new String("b2") – value in the set – the set holds a reference to this object, and the map holds a reference to the set.

As a result, we keep only three strings in memory – no duplicates.
For a change, let’s consider a version with a set and an aggregating object:
set.add(new Key(new String("a"), new String("b1")));
set.add(new Key(new String("a"), new String("b2")));
After executing the first line, the following strings will be created:

new String("a") – value in the Key object – Key holds a reference to this object, and the set holds a reference to the Key
new String("b1") – value in the Key object – Key holds a reference to this object, and the set holds a reference to the Key

After executing the second line, the following strings will be created:

new String("a") – value in the Key object – Key holds a reference to this object, and the set holds a reference to the Key
new String("b2") – value in the Key object – Key holds a reference to this object, and the set holds a reference to the Key

As a result, we keep four strings in memory that cannot be deleted by GC – the new String("a") instance is stored twice.
For performance reasons, I decided to stick with the map of maps. However, I encapsulated the whole ugliness of a set nested in a map within a map into a separate class:
static class MultiDeepMap<K1, K2, V> {
    private Map<K1, Map<K2, Set<V>>> map = new HashMap<>();

    void put(K1 key1, K2 key2, V value) {
        map.computeIfAbsent(key1, it -> new HashMap<>())
                .computeIfAbsent(key2, it -> new HashSet<>())
                .add(value);
    }

    boolean contains(K1 key1, K2 key2, V value) {
        return map.getOrDefault(key1, new HashMap<>())
                .getOrDefault(key2, new HashSet<>())
                .contains(value);
    }
}
With such an API, we have a clear way of adding and checking if something has been added to the map:
map.put("a", "b", "c");
map.contains("a", "b", "c");
To increase code readability, you can get rid of the habit of typing everything as a string (the practice known as String Typing) by wrapping strings in classes/records and replacing:
MultiDeepMap<String, String, String>
with:
MultiDeepMap<A, B, C>
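A sketch of what such wrapping could look like; A, B, and C are placeholder names, just like in the rest of the example:

record A(String value) { }
record B(String value) { }
record C(String value) { }

// records provide equals/hashCode, so they work as map keys and set elements
MultiDeepMap<A, B, C> map = new MultiDeepMap<>();
map.put(new A("a"), new B("b"), new C("c"));
map.contains(new A("a"), new B("b"), new C("c"));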
In my case, this resulted in a roughly 10% increase in memory usage for this collection. All measurements were performed on OpenJDK 17.0.1.
It’s not enough to know the internal mechanisms of the JVM. You also need to remember them at the right time, especially during the daily maintenance of the code.
Global state is evil until proven otherwise.
– Martin Fowler
On this blog, I sometimes touch upon taboo subjects, like in the post Optional as a Field and what are you going to do to me about it?. This time, it’s about public static variables. Much has already been said about the harm their usage has inflicted on the world. In this post, I’d like to analyze a specific case of their usage, namely metrics.
In the Mierz logi na zamiary (bite off more than you can measure) post by Bartek Gałek, he described how important metrics are.
Collecting metrics is also very straightforward. In the Spring framework, all you need to do is inject MeterRegistry…

Well, yes, just inject MeterRegistry. But does everything in my code have to be a Bean just to collect metrics? If I want to collect metrics in a POJO, do I have to create a factory that will be a Bean and set the MeterRegistry in the POJO instance?
Of course NOT! After all, if you want to log something, you don’t inject a logger (at least I haven’t encountered that). Instead, you use a static instance and log whatever and wherever you want. I assume that logs and metrics are not that distant from each other. So let’s try to apply a similar approach to metrics.
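For reference, this is the familiar logger pattern the analogy refers to (SLF4J; the class name is illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class OrderService {
    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    void placeOrder() {
        log.info("Order placed"); // no injection needed – the logger is just a static instance
    }
}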
In the simplest approach, we could just create a class with a public static field holding an instance of MeterRegistry.
We would initialize this field at the start of the application and then use it freely.
Below is a slightly more elaborate implementation, where the mutator has package visibility and the accessor is public.
public class MeterRegistryHolder {
    private static MeterRegistry aMeterRegistry;

    static void init(MeterRegistry meterRegistry) {
        aMeterRegistry = meterRegistry;
    }

    public static MeterRegistry meterRegistry() {
        return aMeterRegistry;
    }
}

@Configuration
public class MeterRegistryHolderInitializer {

    MeterRegistryHolderInitializer(MeterRegistry meterRegistry) {
        MeterRegistryHolder.init(meterRegistry);
    }
}
This implementation is very simple yet provides immense flexibility for adding metrics in the code.
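For example, any class, whether a Bean or a plain POJO, can then record a metric like this (the metric name is illustrative):

class OrderProcessor { // a plain POJO, not a Spring Bean
    void process() {
        MeterRegistryHolder.meterRegistry()
                .counter("orders.processed")
                .increment();
    }
}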
If you need several instances of MeterRegistry because, for example, you send metrics to different places, you can extend the MeterRegistryHolder class to hold multiple instances. For convenience, you can add a static import for MeterRegistryHolder.meterRegistry. The code will remain largely unchanged.
Instead of meterRegistry.counter(), it will be meterRegistry().counter(). You can also use the Metrics class from Micrometer. It has a much more extensive API through which, although we don’t have access to the MeterRegistry object, we have a range of methods for generating metric instances associated with the global MeterRegistry.
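For instance, with Micrometer’s static facade (registries are attached to the global one via Metrics.addRegistry; the metric name is illustrative):

import io.micrometer.core.instrument.Metrics;

class PaymentService {
    void pay() {
        Metrics.counter("payments.executed").increment();
    }
}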
Global variables are infamous, and not without reason. Below are two limitations of the discussed approach to keep in mind.
If you need to test metrics, you can achieve this by setting a Spy/Mock in MeterRegistryHolder or by initializing it with a SimpleMeterRegistry. However, note that in this case, metric tests should not be run concurrently.
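A minimal test sketch, assuming the test class sits in the same package as MeterRegistryHolder (so it can call the package-private init) and reusing the hypothetical OrderProcessor from the earlier snippet:

import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class OrderProcessorTest {

    @Test
    void shouldCountProcessedOrders() {
        SimpleMeterRegistry registry = new SimpleMeterRegistry();
        MeterRegistryHolder.init(registry);

        new OrderProcessor().process();

        assertEquals(1.0, registry.counter("orders.processed").count());
    }
}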
Also, note that in this implementation, we do not control the order in which beans are initialized and when MeterRegistryHolder will be initialized. Therefore, if you try to collect metrics during application context initialization, the meterRegistry reference may be empty. In such a situation, you can either expand the MeterRegistryHolder (I have prepared a sample implementation on GitHub) or resort to the old proven injection. A simple implementation of MeterRegistryHolder can be used for everything that happens after the context is initialized.
I’m not assuming that this approach will work in all cases. In my recent projects, it worked great, providing a lot of freedom for adding metrics.
Give global variables a chance (especially when you limit their variability to the package scope).
It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.
– Will Rogers
We all know how crucial good naming conventions for variables, functions, classes, and everything we work with are. One of the popular techniques of refactoring is renaming. Programmers spend a considerable amount of time brainstorming names. Therefore, one would expect every name to make sense and be correct, and when it’s not, we should be able to change it.
Unfortunately, life is cruel! Even in the Java standard library, there’s a stench. Let’s take a look at the LocalDateTime class. What is this class? The name suggests that it holds a local date. That’s the kind of response I hear in job interviews (if someone remembers what the java.time API is).
Let’s conduct a little experiment then. I have my computer set to the Polish time zone, which means the following assertion is correct.
assert ZoneId.systemDefault().equals(ZoneId.of("Europe/Warsaw"));
If LocalDateTime stores the date in the local time zone, then to get the current date in another time zone, I should be able to execute the code:
ZonedDateTime now = LocalDateTime.now().atZone(ZoneId.of("UTC"));
and the following assertion should pass:
assert now.equals(ZonedDateTime.now(Clock.system(ZoneId.of("UTC"))));
However, the assertion fails.
This is because LocalDateTime has little to do with locality. Moreover, the same issue applies to LocalDate.
If you see the expression
ZonedDateTime now = LocalDateTime.now().atZone(ZoneId.of("UTC"));
chances are high that it’s a bug. I’ve fixed such bugs several times already. They are not easy to detect, especially if creating instances of LocalDateTime and converting to ZonedDateTime are far apart, for example, in different files.
In fact, the atZone method of the LocalDateTime class only makes sense when we know the context of creating the instances – we know in which time zone the LocalDateTime instance was created.

So, what’s happening in LocalDateTime.now().atZone(ZoneId.of("UTC"))? The LocalDateTime.now() method returns the current system date and then a given time zone is added to it. In our case, it’s UTC. As it turns out, this has nothing to do with “now”. The result is “now” shifted by the time zone difference between the system time zone and UTC.
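A fixed clock makes the shift explicit (assuming, as above, a system zone of Europe/Warsaw, which is UTC+1 in winter):

Clock clock = Clock.fixed(Instant.parse("2024-01-10T12:00:00Z"), ZoneId.of("Europe/Warsaw"));
ZonedDateTime relabeled = LocalDateTime.now(clock).atZone(ZoneId.of("UTC"));
ZonedDateTime actual = ZonedDateTime.now(clock).withZoneSameInstant(ZoneId.of("UTC"));
System.out.println(relabeled); // 2024-01-10T13:00Z[UTC] – the Warsaw wall-clock time relabeled as UTC
System.out.println(actual); // 2024-01-10T12:00Z[UTC] – the actual instant expressed in UTC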
This behavior shouldn’t surprise those who read the documentation.
A date-time without a time-zone in the ISO-8601 calendar system, such as 2007-12-03T10:15:30.
But why bother reading the documentation when the name explains everything? The clue lies in the motto above. As Will Rogers said, It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.
Dear reader, I still owe you a correct code example.
Understanding how LocalDateTime works, the expression
ZonedDateTime now = LocalDateTime.now().atZone(ZoneId.of("UTC"));
should be replaced with
ZonedDateTime now = ZonedDateTime.now(ZoneId.of("UTC"));
We now know that LocalDateTime is a misleading, weak name. So, how to proceed? Perhaps a better name would be ZonelessDateTime, analogous to ZonedDateTime? Alternatively, something like NoZoneDateTime or NotZonedDateTime. Such names have one drawback, though. Generally, a class name should indicate what the object is responsible for, not what it is not. So maybe just DateTime?
If you can come up with a better name (or one of my suggestions fits you), and you’re using Kotlin, you can use type aliases:
typealias ZonelessDateTime = LocalDateTime
I almost forgot, if you can think of a better name, be sure to write to me.
Finally, my favorite solution: don’t use LocalDateTime if possible.
Before we delve into patterns, let’s define the context of a distributed team, which can take various forms.
Remote work can be categorized in many ways. It can include aspects such as the span and number of time zones, native languages, countries of origin, and team cultures. All these elements are important and should be taken into account.
Communication pathways also play a crucial role. They can be categorized into three models: silos, satellites, and dust.
Different models of remote work. From left: silos, satellites, and dust
In this article, I wanted to share my experiences and practices that have proven effective during my remote work.
Below are several practices that can be implemented for distributed teams. Some complement each other, while others are mutually exclusive. Not all of them will fit your work style. However, I’m confident you’ll find something suitable for you.
I’m probably not surprising anyone by saying that a microphone is useful for remote work. However, I would like to emphasize that it must be a good microphone. If you want meetings to run smoothly, and people not to waste energy straining to hear and instead focus on thinking, you need a decent microphone. The one built into your laptop certainly isn’t enough! It might be sufficient when you’re sitting close to the computer in a quiet room, but such conditions are hard to come by. It’s better to invest in a quality headset, and in conference rooms, in dedicated “room” solutions. Placing a Jabra on the table doesn’t solve the problem — or rather, it does, but only in small rooms with a maximum of 4 people. It’s worth considering how many people the room is intended for and provide the appropriate setup.
The “no conference rooms” practice is particularly applicable to the satellite model. In a meeting where most people are in a conference room and individual satellites are dialed in, equality is easily forgotten. It’s difficult for people outside the conference room to speak up and get others’ attention. In cases where people don’t know each other well yet, the issue of “who said that” arises. If not everyone is visible on camera, this problem is guaranteed to occur.
Instead of meeting in a conference room, let everyone stay at their desks, sit with coffee, or find some other secluded place. This approach teaches empathy and reduces the risk of exclusion. It also forces everyone to develop better communication methods.
Pairing can take two forms. The first, minimalist form is when each satellite person is assigned someone from the conference room as a pair. Their task is to ensure that the satellite person’s voice is heard.
When we combine this rule with the “no conference rooms” rule, we get the second form. Grouping everyone into pairs has the advantage that two people sitting at one computer find it easier to stay focused. No one will start checking email while the other person is sitting next to them. It’s also important to ensure that pairs are not permanent, but individuals exchange partners from meeting to meeting.
Some people often turn off their camera when joining a meeting. This happens particularly often when people work from home. Don’t do this! Meetings run smoother when participants can see each other. Body language is the fastest form of communication and also the most universal. There’s a reason why professional poker players hide their faces. Our communication consists not only of words and tone of voice but also of facial expressions, gestures, posture, and even eye movements. It’s a shame to give up this additional bit of information.
If you don’t take notes during meetings, which are then shared with other participants, start doing so regardless of whether you work in a distributed model or not.
If you work in a distributed model, consider taking notes during meetings and sharing them online as they are being created (e.g., via screen sharing). This way, everyone can add something, supplement, or correct. The greatest value comes when meeting participants don’t know each other well.
Maintaining focus in a distributed model is more challenging than when everyone meets in one place. Try to ensure that meetings don’t last longer than one hour, have a specific agenda, and stick to the meeting schedule. It’s advisable to stick to one topic per meeting, following the Single Point Agenda (aka Single Point Meeting) approach. Even Google noticed that lengthy meetings serve no purpose and added an option for quick meetings to Google Calendar.
Prefer asynchronous communication over synchronous (real-time), especially when the team works in different time zones.
Asynchronous communication allows people to focus on their tasks. If my problem or question doesn’t require an immediate response, I can write an email that will be read at a convenient time instead of distracting others from their tasks with synchronous communication. An important aspect is to establish standards/contracts so that everyone in the team has the same expectations and there are no misunderstandings, e.g. we respond to e-mails within 24 hours.
Most of the presented practices relate to meeting organization. This advice concerns the organization of the entire workday. If the whole team works in the same time zone, turn on one video channel where everyone is present throughout the day. It’s somewhat like a virtual room. If someone wants to speak up, they can do so just as they would in a physical room.
Try to limit informal communication, especially in the satellite model. It’s not good when some team members can’t participate in decision-making or aren’t informed about certain arrangements just because they couldn’t go out for lunch with part of the team. Even if decisions are made during coffee meetings, always take notes and share them with the rest of the team. A simple email is enough to inform about decisions made. If you have a company Wiki or other shared documents, it’s worth using them for this purpose. This way, you can always check and see why a particular decision was made and by whom. Of course, this shouldn’t replace in-depth analyses and so-called “design docs”, where all pros and cons and the entire decision-making process are analyzed. Even for seemingly trivial decisions, it’s worth keeping the reasons for making them in one place.
This rule is worth implementing when the team is spread across different time zones or when its members work at different hours. Otherwise, it’s better not to introduce it. The rule is simple. Let’s assume that if person A’s time is 10:00 am, then person B’s is 4:00 pm, and person C’s is 11:00 pm. Organize meetings so that one time it’s at 10:00 am for A (others adjust), the next time at 10:00 am for B, and the next time at 10:00 am for C.
Remember also that when you cancel a meeting a few hours before it starts, not everyone will find out in time. It will be particularly harsh if someone gets up at 3am specifically for that meeting, takes a shower, eats breakfast, turns on the computer, and reads that the meeting has been canceled. Believe me, it’s really frustrating!
There are many online tools that allow you to simulate a classic whiteboard or flipchart. You can write, draw, stick notes on them. It’s important that even if only one meeting participant is remote, they have access to the whiteboard. I’ve had a graphics tablet for some time now and I must admit that after practice, it works better than a physical board with markers.
For large meetings in a multi-office model, it’s worth having an assistant facilitator in each location to help participants work through the agenda smoothly.
Working in distributed teams can be and often is very efficient. However, nothing can replace real interpersonal relationships that can only be built during face-to-face meetings. Such meetings can take place once a month or once a quarter. It’s important that they don’t happen less frequently than once a year.
I hope these few basic tips will make your remote work easier and more enjoyable. Unfortunately, there’s no one-size-fits-all solution, and each team has to develop its own communication methods, whether they work remotely or not.
Watch it here: https://www.youtube.com/watch?v=2CVJuPtlNVU
Note: The presentation is in Polish.
Enjoy the insights and feel free to share your feedback in the comments!
For further insights and concepts discussed in the talk, check out my Micro-monolith anti-pattern article.