With ESG and sustainability becoming increasingly important topics for startups and investors alike, we’re taking a look at optimizing architecture. With a few changes to your architecture, it’s possible to improve both your sustainability and your cost efficiency. We’re working closely with our portfolio startups and cloud providers – notably AWS – to facilitate this transition, and have developed five key strategies to adopt when optimizing your architecture for sustainability and cost efficiency:
- Elasticity and alignment to demand – Responding dynamically to demand can significantly reduce your cost and energy footprint. A key way to do this is to scale production automatically based on demand and usage, and to reduce dev and test costs by scheduling non-production resources to shut down during off-hours (e.g. nights and weekends) – see the scheduling sketch after this list. In addition, adopting a serverless, event-driven architecture can further improve cost efficiency and sustainability. Lastly, check with your cloud provider for recommendations – many providers, especially the larger ones such as AWS and Azure, offer tooling to help in this area.
- Right-sizing your software and architecture patterns – By building your software with sustainability and cost efficiency in mind, you set a solid foundation for strong performance in this area. This requires some strategic choices, especially for ‘heavy duty’ activities such as ML models: do you really need a model? Do you really need to train your own model? What tradeoffs can the business make, and how will they affect the customer and the backend? A few practical considerations are the tradeoffs between asynchronous and scheduled jobs, using transient resources or clusters aligned with job schedules, and, crucially, optimizing code and removing or refactoring code that sees little or no use. Depending on the software and architecture, selecting the cheapest instance type that still meets performance needs (for example, by identifying underutilized CPU, RAM, storage, and network capacity that can be downsized – see the right-sizing sketch after this list) can help to optimize costs as well.
- Choose the right cloud – Cloud providers are responsible for the sustainability of the cloud itself, and depending on the provider and the region there will be significant differences in energy efficiency and cost. The region of the data center plays a key role here, depending on the electricity supply, energy mix, and waste and water policies. Some regions, for example in Europe, are generally more energy efficient than others, such as the US. Choosing the right geolocation for your data centers can therefore considerably reduce your energy footprint.
- Measure, monitor, and improve – Part of optimizing architecture for sustainability and cost efficiency is understanding where you stand today so you can determine where to make changes tomorrow. Optimization is an ongoing activity as the software evolves, so it is essential to measure and monitor your consumption consistently and on an ongoing basis, giving you clear visibility into what you consume, how it is evolving, and how it can change. It is also important to create the link between tech and finance – by enabling visibility for the CFO into the financial aspects of the tech organization, you create greater transparency and collaboration across functions and can ultimately go further together in reducing the energy footprint and saving costs (see the cost-reporting sketch after this list).
- Optimize, optimize, optimize – Once you have determined the right architecture and the right cloud, and are aligned to demand, it’s time to ruthlessly and continually optimize. As with #2, “Right-sizing your software and architecture patterns”, this requires strategic consideration of tradeoffs and priorities that will have an impact across the entire product and organization. For example, how is data being stored, processed, and used? Do you need large amounts (or all) of your data in real time, or can you develop a lean structure that is synchronized with your needs? Can you leverage different data tiering and storage policies based on how frequently data is accessed? Understanding where your needs fall in the range from ‘hot’ data that must be retrievable within milliseconds to ‘cold’ data that only requires infrequent archive retrieval is a critical distinction for both cost and energy – the lifecycle sketch after this list shows one way to encode that distinction. Furthermore, can you optimize processing clusters, file formats, and compression to create further efficiencies? In general, the key is to align your software’s needs as closely as possible with the activities you engage in. This will enable you to save on costs and reduce your energy footprint simultaneously.
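To make the scheduling idea from strategy #1 concrete, here is a minimal sketch using boto3 that stops running EC2 instances tagged as non-production. The `environment` tag and its `dev`/`test` values are illustrative assumptions; in practice you would trigger something like this from a scheduler (for example a nightly EventBridge rule) and pair it with a matching start job in the morning.

```python
# Minimal sketch: stop non-production EC2 instances outside business hours.
# Assumes instances are tagged environment=dev or environment=test; the tag
# key, values, and schedule are illustrative, not a prescribed convention.
import boto3

ec2 = boto3.client("ec2")

def stop_non_production_instances():
    """Find running instances tagged as dev/test and stop them."""
    instance_ids = []
    for page in ec2.get_paginator("describe_instances").paginate(
        Filters=[
            {"Name": "tag:environment", "Values": ["dev", "test"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                instance_ids.append(instance["InstanceId"])

    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    stopped = stop_non_production_instances()
    print(f"Stopped {len(stopped)} non-production instances: {stopped}")
```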
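For the right-sizing point in strategy #2, a simple starting place is to compare instance sizes against observed utilization. The sketch below pulls average CPU utilization from CloudWatch over a two-week window and flags candidates for downsizing; the 10% threshold and lookback period are illustrative assumptions, and memory, storage, and network metrics deserve the same treatment.

```python
# Minimal sketch: flag EC2 instances whose average CPU utilization over the
# last two weeks suggests they could be downsized. The 10% threshold is an
# illustrative assumption, not a universal rule.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def underutilized_instances(cpu_threshold_percent=10.0, lookback_days=14):
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=lookback_days)
    candidates = []

    for page in ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                stats = cloudwatch.get_metric_statistics(
                    Namespace="AWS/EC2",
                    MetricName="CPUUtilization",
                    Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
                    StartTime=start,
                    EndTime=end,
                    Period=86400,          # one datapoint per day
                    Statistics=["Average"],
                )
                datapoints = stats["Datapoints"]
                if not datapoints:
                    continue
                avg_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
                if avg_cpu < cpu_threshold_percent:
                    candidates.append(
                        (instance["InstanceId"], instance["InstanceType"], round(avg_cpu, 1))
                    )
    return candidates

if __name__ == "__main__":
    for instance_id, instance_type, avg_cpu in underutilized_instances():
        print(f"{instance_id} ({instance_type}): avg CPU {avg_cpu}% – consider downsizing")
```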
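To support the measurement and the tech–finance link in strategy #4, the sketch below pulls last month’s cost per service from the AWS Cost Explorer API so engineering and finance can review the same breakdown. It assumes Cost Explorer is enabled on the account; a sustainability review would pair these numbers with your provider’s carbon-footprint reporting.

```python
# Minimal sketch: report last month's AWS cost per service so engineering
# and finance can review the same numbers. Assumes Cost Explorer is enabled.
import boto3
from datetime import date, timedelta

# The Cost Explorer API is served from the us-east-1 endpoint.
ce = boto3.client("ce", region_name="us-east-1")

def monthly_cost_by_service():
    end = date.today().replace(day=1)                   # first day of current month
    start = (end - timedelta(days=1)).replace(day=1)    # first day of previous month

    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    groups = response["ResultsByTime"][0]["Groups"]
    costs = [
        (g["Keys"][0], float(g["Metrics"]["UnblendedCost"]["Amount"])) for g in groups
    ]
    return sorted(costs, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for service, amount in monthly_cost_by_service()[:10]:
        print(f"{service}: ${amount:,.2f}")
```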
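Finally, for the hot/cold tiering question in strategy #5, an S3 lifecycle configuration is one common way to encode storage policy as code. The sketch below transitions ageing objects under an illustrative prefix to infrequent-access and archive tiers and eventually expires them; the bucket name, prefix, and day thresholds are assumptions to adapt to your own access patterns.

```python
# Minimal sketch: an S3 lifecycle rule that moves ageing data from hot to
# colder storage tiers and eventually expires it. The bucket name, prefix,
# and day thresholds are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

def apply_tiering_policy(bucket_name="example-data-bucket", prefix="analytics/"):
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket_name,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-ageing-analytics-data",
                    "Filter": {"Prefix": prefix},
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                        {"Days": 90, "StorageClass": "GLACIER"},      # archive
                    ],
                    "Expiration": {"Days": 365},                      # delete after a year
                }
            ]
        },
    )

if __name__ == "__main__":
    apply_tiering_policy()
```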
Overall, some things to keep in mind here are maximizing utilization and CPU efficiency through software design, and eliminating or minimizing idle resources and unnecessary data processing and storage. Some ‘quick wins’ may also include introducing a shared file system, auto-scaling for processing nodes, moving to serverless processing, and reducing log verbosity and retention if that’s appropriate for your organization.
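As an example of one of those quick wins, the sketch below caps retention on CloudWatch log groups that currently keep logs indefinitely. The 30-day value is an illustrative assumption; choose a period that matches your own compliance and debugging needs.

```python
# Minimal sketch: cap retention on CloudWatch log groups that currently keep
# logs forever. The 30-day value is an illustrative assumption.
import boto3

logs = boto3.client("logs")

def cap_unbounded_log_retention(retention_days=30):
    updated = []
    for page in logs.get_paginator("describe_log_groups").paginate():
        for group in page["logGroups"]:
            # Log groups without a retentionInDays field retain logs indefinitely.
            if "retentionInDays" not in group:
                logs.put_retention_policy(
                    logGroupName=group["logGroupName"],
                    retentionInDays=retention_days,
                )
                updated.append(group["logGroupName"])
    return updated

if __name__ == "__main__":
    for name in cap_unbounded_log_retention():
        print(f"Set 30-day retention on {name}")
```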