
Interface segregation but what about our objects?

The I in SOLID stands for ‘Interface Segregation’. This is a fancy way of saying “splitting apart dependencies”.

Basically, an object should not have to depend on more than it absolutely has to. If it does, it may have to be modified when the dependency changes, even if the change has no relation to what the object actually uses from that dependency.
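As a quick illustration, here's a minimal TypeScript sketch (the interface and function names are mine, purely for illustration, not from any particular codebase):

```typescript
// A "fat" interface: every consumer depends on all of it, so a change
// to the reporting side can break a consumer that only ever prints.
interface OfficeMachine {
  print(document: string): void;
  report(): string;
}

// Segregated: consumers depend only on what they actually use.
interface Printer {
  print(document: string): void;
}

interface Reporter {
  report(): string;
}

// This function only needs printing; changes to Reporter can't touch it.
function printAll(printer: Printer, documents: string[]): void {
  for (const doc of documents) {
    printer.print(doc);
  }
}
```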

The full extent of the principle is well documented and easy to find with a quick search, so I won't describe it further here.

What I wanted to dive into today is why we’re not better at applying the same thing to our models.

I often see models growing out of control in our projects, meaning that any time we need to use the object, map it or pass it along, it takes a lot of energy and very easily breaks interfaces.

Problem

  1. We have a model for a person; a person has a name and an age
  2. Next iteration we need to add an address, since everyone needs to live somewhere
  3. Fast-forward a few steps and the person now has a membership number, a family member counter, a total purchase history counter, and the list goes on

Most of these properties don't have anything to do with a person. When we begin writing software, I believe we are seldom fully aware of what our model is really meant to represent, and we very seldom go back to fix the issue. So in the end we end up with these huge models (and in some cases storage tables) spanning tens or even hundreds of properties. Many may even be nullable, meaning they don't always belong to the object in the first place.
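To make it concrete, here's a hypothetical sketch of where this ends up (the property names are mine, chosen to match the steps above):

```typescript
// The bloated model: unrelated concerns pile up on Person, and the
// nullable fields are a hint that they never really belonged here.
interface Person {
  id: string;
  name: string;
  age: number;
  address?: string;
  membershipNumber?: string;
  familyMemberCount?: number;
  totalPurchaseCount?: number;
  // ...and the list goes on
}
```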

To solve this

It’s pretty easy to begin with.

Anything that certainly belongs to the object, and is used together whenever the object is used, goes in the same model.

So, looking at our person example, we can split it into different models (sketched in code after the lists below).

A person has

  • A name
  • A unique identifier
  • A last known address

Membership is a model that

  • Has a person reference
  • Has a unique membership id
  • Has a membership start date

A family model has

  • A person reference
  • A unique identifier
  • A nullable second person reference
  • A person type identifier (child, partner, parent)

A purchase model has

  • A person reference
  • A unique identifier
  • A purchase date
  • A (denormalised) item counter
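In code this could look something like the following TypeScript sketch, replacing the bloated model from before (the field names are my assumptions based on the lists above):

```typescript
// Each model owns only the data that is always used together.
interface Person {
  id: string;               // unique identifier
  name: string;
  lastKnownAddress: string;
}

interface Membership {
  id: string;               // unique membership id
  personId: string;         // person reference
  startDate: Date;          // membership start date
}

type FamilyRelation = 'child' | 'partner' | 'parent';

interface FamilyMember {
  id: string;                // unique identifier
  personId: string;          // person reference
  relatedPersonId?: string;  // nullable second person reference
  relation: FamilyRelation;  // person type identifier
}

interface Purchase {
  id: string;               // unique identifier
  personId: string;         // person reference
  purchaseDate: Date;
  itemCount: number;        // (denormalised) item counter
}
```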

Benefit

By doing this we can, for example,

  • Cache some models if needed for performance
  • Easily extend properties to some models and update their usage only where needed
  • Easily remove models if they go out of use

Simply put, we've drastically reduced the range of impact when changing things.

Is this alright?

The problem with this is that sometimes, we simply need that huge model.

We want to get all our data in one chunk and save ourselves the performance overhead of doing four calls to an API or database where we could manage with just one.

This varies very much case by case, but I think we can (see the sketch after this list):

  1. Split the models as much as possible
  2. Use the smaller models where possible, maybe parts of the UI can be loaded independently
  3. Combine them again in a mashup if needed
  4. Where possible, keep the mashup as close to the client as possible (so any changes can be made closer to the place that actually needs them)
  5. If required, create the mashup closer to the source of data but still trying to reuse the smaller independent models and the same logic for assembling them
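As a sketch of points 3 to 5, reusing the small models from the sketch above (the fetch functions are hypothetical stand-ins for whatever API or repository calls you have):

```typescript
// Hypothetical data-access functions; substitute your own API/database calls.
declare function fetchPerson(personId: string): Promise<Person>;
declare function fetchMembership(personId: string): Promise<Membership | undefined>;
declare function fetchPurchases(personId: string): Promise<Purchase[]>;

// The mashup: composed from the independent models, kept close to the
// client so changes happen where they're actually needed.
interface PersonOverview {
  person: Person;
  membership?: Membership;
  purchases: Purchase[];
}

async function loadPersonOverview(personId: string): Promise<PersonOverview> {
  // Fetch the independent models in parallel instead of one huge call.
  const [person, membership, purchases] = await Promise.all([
    fetchPerson(personId),
    fetchMembership(personId),
    fetchPurchases(personId),
  ]);
  return { person, membership, purchases };
}
```

Because the composition happens in one place, replacing the separate calls with a single combined endpoint later only changes this function, not its callers.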

Conclusion

Though I'm sure this is easier said than done, I'm also sure it's not impossible. Bit by bit the models should be splittable; even if the original data source isn't normalised, it can still be normalised at the API level. When the API is responsible for putting together the models, the underlying data source can be updated to follow in due time.
