Migrate Legacy Data Without Screwing It Up

Migrating legacy data is hard.

Many organizations run old, complex, on-premise CRM systems. Today, there are plenty of cloud SaaS alternatives, which come with many benefits, such as paying as you go and paying only for what you use. So, organizations decide to move to the new systems.

Nobody wants to leave valuable customer data behind in the old system and start with an empty new one, so we need to migrate this data. Unfortunately, data migration is not an easy task: around 50 percent of deployment effort is consumed by data migration activities. According to Gartner, Salesforce is the leader in cloud CRM solutions, so data migration is a major topic for Salesforce deployments.

10 Tips For Successful Legacy Data Migration To Salesforce

How to ensure a successful transition of legacy data into a new system while preserving all of its history.

So, how can we ensure a successful transition of legacy data into a shiny new system while preserving all of its history? In this article, I provide 10 tips for successful data migration. The first five apply to any data migration, regardless of the technology used.

Data Migration in General

1. Make Migration a Separate Project

In the software deployment checklist, data migration is not just an “export and import” item handled by a clever “push one button” data migration tool with predefined mappings for target systems.

Data migration is a complex activity, deserving a separate project, plan, approach, budget, and team. An entity-level scope and plan must be created at the project’s beginning, ensuring there are no surprises, such as “Oh, we forgot to load those clients’ visit reports; who will do that?” two weeks before the deadline.

The data migration approach defines whether we will load the data in one go (also known as the big bang), or whether we will load small batches every week.

This is not an easy decision, though. The approach must be agreed upon and communicated to all business and technical stakeholders so that everybody is aware of when and what data will appear in the new system. This applies to system outages too.

2. Estimate Realistically

Do not underestimate the complexity of the data migration. Many time-consuming tasks accompany this process and may be invisible at the project’s beginning. For example, you may need to load specific data sets for training purposes: a bunch of realistic data, but with sensitive items obfuscated so that training activities do not generate email notifications to clients.

The basic factor for estimation is the number of fields to be transferred from a source system to a target system.

Some amount of time is needed in different stages of the project for every field, including understanding the field, mapping the source field to the target field, configuring or building transformations, performing tests, measuring data quality for the field, and so on.

Using clever tools, such as Jitterbit, Informatica Cloud Data Wizard, Starfish ETL, Midas, and the like, can reduce this time, especially in the build phase.

In particular, understanding the source data – the most crucial task in any migration project – cannot be automated by tools, but requires analysts to take time going through the list of fields one by one.

The simplest estimate of the overall effort is one man-day for every field transferred from the legacy system.

An exception is data replication between the same source and target schemas without further transformation – sometimes known as 1:1 migration – where we can base the estimate on the number of tables to copy.

A detailed estimate is an art of its own.
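As a back-of-the-envelope sketch of this rule of thumb, the effort can be computed directly from field counts. The entity names and counts below are hypothetical examples, not figures from any real project:

```python
# Rough migration effort estimate: ~1 man-day per field transferred,
# per the rule of thumb above. Entity names and counts are made up.
def estimate_effort_days(fields_per_entity, days_per_field=1.0):
    """Sum field counts across entities and apply the per-field rate."""
    return sum(fields_per_entity.values()) * days_per_field

entities = {"Account": 45, "Contact": 30, "Opportunity": 25}  # hypothetical
print(estimate_effort_days(entities))  # 100 fields -> 100.0 man-days
```

A detailed estimate would, of course, weight fields by transformation complexity and add time for testing and data quality measurement.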

3. Check Data Quality

Do not overestimate the quality of source data, even if no data quality issues are reported from the legacy systems.

New systems have new rules, which may be violated with legacy data. Here’s a simple example. Contact email can be mandatory in the new system, but a 20-year-old legacy system may have a different point of view.

There can be mines hidden in historical data that have not been touched for a long time but that activate when transferred to the new system. For example, old data using European currencies that no longer exist must be converted to euros; otherwise, those currencies must be added to the new system.
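As a small illustration of that currency example, here is a hedged sketch of such a conversion, using the official fixed euro conversion rates for a few defunct currencies (the function shape and pass-through behavior are assumptions for the example):

```python
# Convert amounts in defunct eurozone currencies to EUR using the
# official fixed conversion rates (units of legacy currency per 1 EUR).
FIXED_RATES = {
    "DEM": 1.95583,   # German mark
    "FRF": 6.55957,   # French franc
    "ITL": 1936.27,   # Italian lira
}

def to_eur(amount, currency):
    """Return (amount_in_eur, 'EUR'); pass through currencies we cannot map."""
    if currency in FIXED_RATES:
        return round(amount / FIXED_RATES[currency], 2), "EUR"
    # e.g., USD stays as-is; such currencies must exist in the new system
    return amount, currency

print(to_eur(100.0, "DEM"))  # (51.13, 'EUR')
```

Rounding and which currencies to pass through versus reject are exactly the kinds of mapping decisions that should be recorded with the business team.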

Data quality significantly influences effort, and the simple rule is: The further we go in history, the bigger mess we will discover. Thus, it is vital to decide early on how much history we want to transfer into the new system.

4. Engage Business People

Business people are the only ones who truly understand the data and who can therefore decide what data can be thrown away and what data to keep.

It is important to have somebody from the business team involved during the mapping exercise, and for future backtracking, it is useful to record mapping decisions and the reasons for them.

Since a picture is worth more than a thousand words, load a test batch into the new system, and let the business team play with it.

Even if data migration mapping is reviewed and approved by the business team, surprises can appear once the data shows up in the new system’s UI.

“Oh, now I see, we have to change it a bit,” becomes a common phrase.

Failing to engage subject matter experts, who are usually very busy people, is the most common cause of problems after a new system goes live.

5. Aim for Automated Migration Solution

Data migration is often viewed as a one-time activity, and developers tend to end up with solutions full of manual actions hoping to execute them only once. But there are many reasons to avoid such an approach.

  • If migration is split into multiple waves, we have to repeat the same actions multiple times.
  • Typically, there are at least three migration runs for every wave: a dry run to test the performance and functionality of data migration, a full data validation load to test the entire data set and to perform business tests, and of course, production load. The number of runs increases with poor data quality. Improving data quality is an iterative process, so we need several iterations to reach the desired success ratio.

Thus, even if migration is a one-time activity by nature, having manual actions can significantly slow down your operations.

Salesforce Data Migration

Next, we will cover five tips for a successful Salesforce migration. Keep in mind that these tips likely apply to other cloud solutions as well.

6. Prepare for Lengthy Loads

Performance is one of the biggest tradeoffs, if not the biggest, when moving from an on-premise to a cloud solution – Salesforce not excluded.

On-premise systems usually allow for direct data load into an underlying database, and with good hardware, we can easily reach millions of records per hour.

But, not in the cloud. In the cloud, we are heavily limited by several factors.

  • Network latency – Data is transferred via the internet.
  • Salesforce application layer – Data moves through a thick, multitenant API layer until it lands in Salesforce’s underlying Oracle databases.
  • Custom code in Salesforce – Custom validations, triggers, workflows, duplication detection rules, and so on – many of which disable parallel or bulk loads.

As a result, load performance can be thousands of accounts per hour.

It can be less, or it can be more, depending on factors such as the number of fields, validations, and triggers. But it is several grades slower than a direct database load.

Performance degradation, which is dependent on the volume of the data in Salesforce, must also be considered.

It is caused by indexes in the underlying RDBMS (Oracle) used for checking foreign keys and unique fields and for evaluating duplication rules. The basic formula is approximately a 50 percent slowdown for every grade of 10, caused by the O(log N) time complexity of sort and B-tree operations.
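That “50 percent per grade of 10” rule can be turned into a simple model. This is only an illustration of the stated formula – the base load time and reference volume are hypothetical:

```python
import math

# Model of the degradation described above: load time grows by ~50%
# for every tenfold ("grade of 10") increase in existing data volume.
# base_hours and ref_records are hypothetical calibration points.
def load_time_hours(records_in_org, base_hours=1.0, ref_records=10_000):
    grades = math.log10(records_in_org / ref_records)
    return base_hours * 1.5 ** grades

print(round(load_time_hours(10_000), 2))     # 1.0
print(round(load_time_hours(100_000), 2))    # 1.5
print(round(load_time_hours(1_000_000), 2))  # 2.25
```

In other words, a load that takes one hour into a near-empty org can take more than twice as long once the org holds a hundred times more data.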

Moreover, Salesforce has many resource usage limits.

One of them is the Bulk API limit set to 5,000 batches in 24-hour rolling windows, with the maximum of 10,000 records in each batch.

So, the theoretical maximum is 50 million records loaded in 24 hours.

In a real project, the maximum is much lower due to limited batch size when using, for example, custom triggers.
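The arithmetic behind these ceilings is simple enough to sketch. The realistic batch size of 200 below is a hypothetical figure for an org with heavy custom triggers, not a Salesforce-documented limit:

```python
# Theoretical Bulk API ceiling quoted above: 5,000 batches per rolling
# 24 hours, 10,000 records per batch at most.
BATCH_LIMIT_24H = 5_000
MAX_RECORDS_PER_BATCH = 10_000

print(BATCH_LIMIT_24H * MAX_RECORDS_PER_BATCH)  # 50000000 records / 24 h

# With custom triggers, the workable batch size often has to drop
# (200 is a hypothetical but plausible figure), slashing the ceiling:
realistic_batch_size = 200
print(BATCH_LIMIT_24H * realistic_batch_size)   # 1000000 records / 24 h
```

A two-orders-of-magnitude drop like this is what pushes medium-sized datasets out of big bang territory and into migration waves.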

This has a strong impact on the data migration approach.

Even for medium-sized datasets (from 100,000 to 1 million accounts), the big bang approach is out of the question, so we must split data into smaller migration waves.

This, of course, impacts the entire deployment process and increases the migration complexity because we will be adding data increments into a system already populated by previous migrations and data entered by users.

We must also consider this existing data in the migration transformations and validations.

Further, lengthy loads can mean we cannot perform migrations during a system outage.

If all users are located in one country, we can leverage an eight-hour outage during the night.

But for a company, such as Coca-Cola, with operations all over the world, that is not possible. Once we have U.S., Japan, and Europe in the system, we span all time zones, so Saturday is the only outage option that doesn’t affect users.

And even that may not be enough, so we must load while the system is online, while users are working with it.

7. Respect Migration Needs in Application Development

Application components, such as validations and triggers, should be able to handle data migration activities. Hard disablement of validations at the time of the migration load is not an option if the system must be online. Instead, we have to implement different logic in validations for changes performed by a data migration user.

  • Date fields should not be compared to the actual system date because that would disable the loading of historical data. For example, validation must allow entering a past account start date for migrated data.
  • Mandatory fields, which may not be populated with historical data, must be implemented as non-mandatory, but with validation sensitive to the user, thus allowing empty values for data coming from the migration, but rejecting empty values coming from regular users via the GUI.
  • Triggers, especially those sending new records to the integration, must be able to be switched on/off for the data migration user in order to prevent flooding the integration with migrated data.

Another trick is using a Legacy ID or Migration ID field in every migrated object. There are two reasons for this. The first is obvious: to keep the ID from the old system for backtracking. After the data is in the new system, people may still want to search for their accounts using the old IDs, found in places such as emails, documents, and bug-tracking systems. Bad habit? Maybe. But users will thank you if you preserve their old IDs. The second reason is technical and comes from the fact that Salesforce does not accept explicitly provided IDs for new records (unlike Microsoft Dynamics) but generates them during the load. The problem arises when we want to load child objects, because we have to assign them the IDs of their parent objects. Since we will know those IDs only after loading, the load becomes an awkward multi-step sequence.

Let’s use Accounts and their Contacts, for example:

  1. Generate data for Accounts.
  2. Load Accounts into Salesforce, and receive generated IDs.
  3. Incorporate new Account IDs in Contact data.
  4. Generate data for Contacts.
  5. Load Contacts in Salesforce.

We can do this more simply by loading Accounts with their Legacy IDs stored in a special external field. This field can be used as a parent reference, so when loading Contacts, we simply use the Account Legacy ID as a pointer to the parent Account:

  1. Generate data for Accounts, including Legacy ID.
  2. Generate data for Contacts, including Account Legacy ID.
  3. Load Accounts into Salesforce.
  4. Load Contacts in Salesforce, using Account Legacy ID as parent reference.

The nice thing here is that we have separated the generation and loading phases, which allows for better parallelism, decreased outage time, and so on.
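The generation phase of the simpler flow can be sketched as plain data transformation. The field name `Legacy_ID__c` is a hypothetical custom external-ID field, and the legacy row shapes are invented for the example; the nested-object reference is the standard Salesforce API pattern for pointing at a parent via an external ID:

```python
# Sketch of the "generate" phase: build Salesforce payloads that point
# at the parent Account through an external-ID field instead of a
# Salesforce-generated ID. Legacy_ID__c and the row shapes are examples.
def account_payload(row):
    return {"Name": row["name"], "Legacy_ID__c": row["legacy_id"]}

def contact_payload(row):
    return {
        "LastName": row["last_name"],
        # Reference the parent by its external ID; Salesforce resolves it
        # at load time, so Accounts and Contacts can be generated up front.
        "Account": {"Legacy_ID__c": row["account_legacy_id"]},
    }

legacy_contact = {"last_name": "Smith", "account_legacy_id": "A-42"}
print(contact_payload(legacy_contact)["Account"]["Legacy_ID__c"])  # A-42
```

The usual Salesforce load paths (Data Loader, the APIs) can resolve such an external-ID reference during an upsert, which is what makes the separated generate/load phases possible.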

8. Be Aware of Salesforce Specific Features

Like any system, Salesforce has plenty of tricky parts of which we should be aware in order to avoid unpleasant surprises during data migration. Here are a handful of examples:

  • Some changes to active Users automatically generate email notifications to those users. Thus, if we want to play with user data, we need to deactivate users first and reactivate them after the changes are completed. In test environments, we scramble user emails so that notifications are not fired at all. Since active users consume costly licenses, we are not able to have all users active in all test environments, so we have to manage subsets of active users – for example, activating just those in a training environment.
  • Inactive users can be assigned as owners of some standard objects, such as Account or Case, only after granting the system permission “Update Records with Inactive Owners,” but they can be assigned freely to Contacts and all custom objects.
  • When a Contact is deactivated, all opt-out fields are silently turned on.
  • When loading a duplicate Account Team Member or Account Share object, the existing record is silently overwritten. However, when loading a duplicate Opportunity Partner, the record is simply added resulting in a duplicate.
  • System fields, such as Created Date, Created By ID, Last Modified Date, and Last Modified By ID, can be explicitly written only after granting a new system permission, “Set Audit Fields upon Record Creation.”
  • The history of field value changes cannot be migrated at all.
  • Owners of knowledge articles cannot be specified during the load but can be updated later.
  • The tricky part is storing content (documents and attachments) in Salesforce. There are multiple ways to do it (using Attachments, Files, Feed attachments, or Documents), and each way has its pros and cons, including different file size limits.
  • Picklist fields force users to select one of the allowed values – for example, a type of account. But when loading data using the Salesforce API (or any tool built upon it, such as Apex Data Loader or the Informatica Salesforce connector), any value will pass.

The list goes on, but the bottom line is: Get familiar with the system, and learn what it can do and what it cannot do before you make assumptions. Do not assume standard behavior, especially for core objects. Always research and test.

9. Do Not Use Salesforce as a Data Migration Platform

It is very tempting to use Salesforce as a platform for building a data migration solution, especially for Salesforce developers. It is the same technology for the data migration solution as for the Salesforce application customization, the same GUI, the same Apex programming language, the same infrastructure. Salesforce has objects which can act as tables, and a kind of SQL language, Salesforce Object Query Language (SOQL). However, please do not use it; it would be a fundamental architectural flaw.

Salesforce is an excellent SaaS application with a lot of nice features, such as advanced collaboration and rich customization, but mass processing of data is not one of them. The three most significant reasons are:

  • Performance – Processing of data in Salesforce is several grades slower than in RDBMS.
  • Lack of analytical features – Salesforce SOQL does not support complex queries and analytical functions; these would have to be implemented in Apex code, degrading performance even more.
  • Architecture – Putting a data migration platform inside a specific Salesforce environment is not very convenient. Usually, we have multiple environments for specific purposes, often created ad hoc, so we could spend a lot of time on code synchronization. Plus, you would also be relying on the connectivity and availability of that specific Salesforce environment.

Instead, build a data migration solution in a separate instance (cloud or on-premise) using an RDBMS or an ETL platform. Connect it with the source systems and whichever target Salesforce environments you want, move the data you need into your staging area, and process it there. This will allow you to:

  • Leverage the full power and capabilities of the SQL language or ETL features.
  • Have all code and data in one place so that you can run analyses across all systems.
    • For example, you can combine the newest configuration from the most up-to-date test Salesforce environment with real data from the production Salesforce environment.
  • Be less dependent upon the technology of the source and target systems, and reuse your solution for the next project.

10. Oversee Salesforce Metadata

At the project’s beginning, we usually grab a list of Salesforce fields and start the mapping exercise. During the project, it often happens that the application development team adds new fields into Salesforce, or that some field properties are changed. We can ask the application team to notify the data migration team about every data model change, but that doesn’t always work. To be safe, we need to have all data model changes under supervision.

A common way to do this is to download, on a regular basis, migration-relevant metadata from Salesforce into some metadata repository. Once we have this, we can not only detect changes in the data model, but we can also compare data models of two Salesforce environments.

What metadata to download:

  • A list of objects with their labels and technical names and attributes such as creatable or updatable.
  • A list of fields with their attributes (better to get all of them).
  • A list of picklist values for picklist fields. We will need them to map or validate input data for correct values.
  • A list of validations, to make sure new validations are not creating problems for migrated data.

How do you download metadata from Salesforce? Well, there is no standard metadata export method, but there are multiple options:

  • Generate the Enterprise WSDL – In the Salesforce web application, navigate to the Setup / API menu and download the strongly typed Enterprise WSDL, which describes all the objects and fields in Salesforce (but not picklist values or validations).
  • Call the Salesforce describeSObjects web service, directly or by using a Java or C# wrapper (consult the Salesforce API documentation). This way, you get exactly what you need, and it is the recommended way to export the metadata.
  • Use any of the numerous alternative tools available on the internet.
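However you call `describeSObjects`, the result can be distilled into a small metadata repository. The sketch below processes a trimmed, hand-made stand-in for a real describe response – the structure mirrors the API’s `fields` and `picklistValues` shape, but the payload itself is invented:

```python
# Sketch: distill describeSObjects output into a metadata repository
# entry of field names, types, and picklist values for change detection
# and input validation. describe_result is a hand-made example payload.
def extract_metadata(describe_result):
    fields = {}
    for f in describe_result["fields"]:
        fields[f["name"]] = {
            "type": f["type"],
            "updateable": f.get("updateable", False),
            "picklist_values": [v["value"] for v in f.get("picklistValues", [])],
        }
    return fields

describe_result = {
    "name": "Account",
    "fields": [
        {"name": "Name", "type": "string", "updateable": True},
        {"name": "Type", "type": "picklist", "updateable": True,
         "picklistValues": [{"value": "Customer"}, {"value": "Partner"}]},
    ],
}

meta = extract_metadata(describe_result)
print(meta["Type"]["picklist_values"])  # ['Customer', 'Partner']
```

Snapshotting this structure on a schedule makes it easy to diff two runs (to catch data model changes) or two environments, and the picklist values double as a validation list for incoming data.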

Prepare for the Next Data Migration

Cloud solutions, such as Salesforce, are ready instantly. If you are happy with the built-in functionality, just log in and use it. However, Salesforce, like any other cloud CRM solution, brings specific data migration problems to be aware of, in particular regarding performance and resource limits.

Moving legacy data into a new system is always a journey, sometimes a journey into history hidden in data from past years. In this article, based on a dozen migration projects, I presented 10 tips on how to migrate legacy data and successfully avoid the most common pitfalls.

The key is to understand what the data reveals. So, before you start the data migration, make sure you are well prepared for the potential problems your data may hold.

This article was originally posted on Toptal 


Usability for Conversion: Stop Using Fads, Start Using Data

When it comes to creating and designing a product, we are looking for the best solution to ensure we meet our goal. Ultimately, our goal will always be to convince the customer to buy our product or use our service; i.e., to convert leads into sales. But what can we do to ensure the highest conversion rate possible?

When we look around for ways to understand what works with conversion and what doesn’t, we encounter fads and trends that presumptuously claim to know exactly what we need to do: change a button to a particular color, use a particular picture or icon, employ a certain layout. However, there is no one-size-fits-all “magic bullet” for conversion. Every demographic is different, so we need to use our data and our knowledge of our specific target audience to create designs that convert. If there is one single piece of advice that’s most important, it’s to focus on usability.

Usability and Conversion

Stop following trends to achieve your conversion rates.

Building your product and setting it loose.

You or your client have just launched your new website or product, but you notice that your conversion rate is dramatically low. To use an example for this exercise, let’s give a percentage: 0.3%. That’s only 3 out of every 1,000 leads converting into customers. Presumably, that’s not what you’re looking for.

Concerned, you run off to Google and search for ways to convert users, and you find articles that say:

“Red converts better than green!” “Orange beats any color!” “Cat pictures! Everybody loves kittens!” “Pictures of people convert better!” “Pictures of products convert better!” “Company logos make you $$”

While each of these approaches may in fact have been useful in one or more scenarios, the likelihood that these “magic answers” are right for you is slim at best. There is no data behind the claim that making the button orange in all of our products will help them convert better.

Another thing to consider when reading articles like these is the quality of leads the company is receiving. Although we want as many leads as possible, we also want to make sure these are quality leads, so we get better data and keep improving our product for the target audience. Having a ton of leads might sound exciting, but if we end up wasting our time on leads that don’t go anywhere, we are wasting money and losing the opportunities that will move our company forward and help our product grow.

How do you identify your quality leads?

There are a couple of things to think about before you start the process of optimizing your site for conversion. Here’s a checklist to consider first:

  • Know Your Audience – What is your audience like? Their demographics? Their location? Is your website tailored to them?

  • Know Your Goals – What is the ultimate goal for the site? Are you looking to store emails? Get people to sign up for a service? Buy a product?

  • Know Your Usability Scores – How does your site perform on mobile? Is it responsive? How fast does the site load in the browser? How’s the navigation?

  • Check Your Content – Is your content easy to read? Is the language geared to the personality and education level of your targeted audience? Does the content clearly communicate your message/goal?

  • Check Your Fallout – Where are you losing your audience? What is your bounce rate? What is the average time visitors are spending on your pages? What are the high performing pages? What are the low performing pages?

Once you have all of these questions answered, you can start optimizing your site. You will notice that I didn’t touch on colors or design in the checklist. Although they were not mentioned, once you define your audience, analyze your website, and set clear goals, you will find out whether your design reflects them or misses the mark.

Find your audience.

The most important thing is your audience. To find your audience you have to look at the process before you started working on the site.


Who are you targeting?

It is important that you have a precise definition of whom you are targeting. If your product is intended for people aged 18 to 24, your content, design, and usability should reflect that. The easiest way to come up with all these descriptions is to create your own personas. Personas are fictitious or real people who describe a specific member of your audience. You need to write down everything that you need to know about them: name, age, ethnicity, occupation, technology savviness, etc.

You can use tools like Google Analytics or other paid analytics tools to obtain good, in-depth information about your users. You can also perform user testing, via services like UserTesting.com or in person, and develop your personas from the results.

What are you targeting them for?

Another thing you need to have clear before you even start the design is the purpose of the site. Are you selling the user goods, or are you providing a service? How does the site align with the company’s mission and vision? How does the goal align with your personas?

Defining usability data.

Once you have all this data written down, you can then proceed to check your usability stats. When it comes to mobile websites, there is a great tool I like to use to check my user experience (you’ve probably heard of it too): Google PageSpeed.

As a rule of thumb, you want your User Experience grade to be above 90. This means things are clear and easy to tap/click, see, and navigate on your site. You also want your site speed score to be at least 80. If you score lower, follow the tool’s suggestions to optimize your page for speed. Use services like CloudFlare, Highwinds, or Akamai for caching and a CDN (Content Delivery Network) to help improve speed.

For desktop, I would suggest tools like Crazy Egg, Google Analytics, Visual Website Optimizer, or any other heat map or visual-guide software. These will help you find where people focus most and identify pitfalls, such as areas that don’t get much attention. If you combine some of these products – heat maps, trackers, and Google Analytics – you can identify your fallout and the pages that aren’t performing the way you want them to.

Another great way to test your product is by performing live user testing either at your site or at a formal user testing facility. UserTesting.com is a great tool that you can use to test your websites or mobile applications. You set up your questionnaire and identify your audience, and just wait for the results. Listen to their feedback, but watch what they do. Sometimes an action gives much more data than an answer to a question.

The next step on our list will be to check the content. This could be an incredible source of valuable information. Sometimes we focus on the design, when in reality changing a few words here and there will give you the conversion that you desire. Check your content before you decide to make any other design changes.

Everything checks out, then what is going on?

When you look at the design and start wondering what to change, keep in mind that you want to test for usability and not for conversion. I’ll explain this in a bit. One of the keys to a site that converts well is that the user trusts it. Repeat that again: One of the keys to a site that converts well is that the user trusts it.

Your presentation, colors, branding, content, everything creates an impact on the user and, in just a matter of seconds, you can lose a user or gain their full confidence.

Your product colors

For colors, make sure they are all consistent with your brand and your company. Design for what you want the user to perceive when they first look at the site. Remember, you only have a few seconds before they go away. My general recommendation is to create a three-color palette:

  • Your primary color. This color is what most of the site will have. The color will portray your company/product’s vision.

  • Your secondary color. This color is for the items you use to draw attention to other sections of the site while the user reads and digests your content – the colors for your links, navigation, etc.

  • Your call-to-action color. This color is extremely important. The color of this button or link lets the user know that it performs an action (in our case, converting them). Normally, this color should complement the rest of the colors. You want it to stand out, but not clash with or take away from your brand.

To give you an example of a fad, some sites have claimed in the past that turning a button from green to red, or vice versa, will automatically increase your conversion rate, citing an example of how it worked for their site. Before you rush to change your colors, though, look at your design. Is your primary or secondary color red? If so, a red button will just blend in with the rest of your product, and people will ignore it. Are your colors clashing with red? That creates a distraction, not a conversion.


Layouts are going to be important for converting your users, and they need to be very specific in terms of usability. You need to create harmony between your content strategy and the emotion you want the design to evoke from the user when they see the landing page. Remember what we talked about before? Trust. Make sure your page engenders trust in the user.

A lot of people ask about hero banners and how effective they are in terms of improving conversion rates. The answer to this, like most things we’ve discussed, depends on your audience. Does your hero banner easily explain what the user wants? Then go for it. Test it out. Otherwise, consider other options that might fit better with your message.

Another example of a fad is the hero carousel. Some websites show a big banner on the page, but just as you are reading it, the banner switches over and shows you more information. This rarely works well for usability. You are creating a stressful situation for the user, because now they have a time limit to finish reading what they first saw upon arrival. If you want to use carousels, give users plenty of time to finish reading the content of each slide, or just don’t auto-animate them.

Building forms

If you need the user to sign up for something, make that process obvious, easy, and readily accessible.

  • Do you really need all the fields you have on your sign up form?

  • Could more information be collected once you start building a relationship with your user rather than requiring it of them upfront?

If you need a lot of fields for your product form, consider splitting the form into different steps. Make the user follow your flow. Create a funnel that is enjoyable for the user.

Be clear about why you are asking for information. Do you need an address? Tell the user why. Do you need a phone number? Tell the user why. If you don’t need that information right away and can build a relationship with just an email, go that route. It may take a little longer to “secure” that lead, but in the end, it will bring much more quality to your brand and your business, and will probably yield more leads.

Elements inside your pages

Work with your content team (if you are not writing the content yourself) to find the things you want to emphasize, communicating to the user that you are looking out for their best interest.

Typography plays an important role. Make sure your main headline is the easiest thing to read and quickly answers the question the user has when landing on your page. Use bullet points to engage the user with simple, quick answers. Give users what they want, but entice them; educate them on why they need to know more. Once you build that trust and interest, your leads will start converting at a higher rate.

While no single image is magical, using imagery to complement your message is a valuable technique. Are you trying to offer a service that is good for the family? Look for images that complement this, like a happy family. Imagery can play a big role depending on your audience so, again, it is very important that you know your audience before you choose an image.

NOTE: A quick word about stock photography. It should be common sense by now but, just in case: make sure you’re using stock photography that looks natural. You want the photo to enhance the story of your page, not just display people looking eerily happy while they stare at the camera.

Yet another example of a fad is showing a professional-looking person as a “customer rep” which, nine times out of ten, is just a stock image meant to give the user a sense of trust. Users will be able to tell that these aren’t really the people taking care of them. Additionally, the product should be about your user, not you. How is the image going to make them feel? How does the image relate to the product you’re selling or the service you’re providing?

Don’t have imagery? An illustration can help provide more information and instill confidence in your user. Again, when designing an illustration, focus on what the user needs, the emotion they should feel upon seeing it, and so on.

Usability and Conversion

Usability instead of conversion

So how do you measure the success of your new designs? The main thing that you have to understand about conversion is that it can’t be completely broken down into categories. You have to test and test often. However, when you test, be specific on what you are trying to test. The more specific you can make your test, the better data you will collect to keep improving.

So why should you test for usability rather than conversion? Because when you test for usability you are by definition looking at things from the user’s perspective. The user will notice this and, if you can reach a level of trust between the user and your brand, you will be able to get a conversion. The keyword here for you is trust. If you build only to try and “trick” the user into converting, you will end up damaging a relationship with that user, which will cause you to lose the confidence and trust from that user and many others.

Build trust and build relationships. This can’t be emphasized enough. When you build trust with your users, you keep them coming back and you help promote your business indirectly by word of mouth. People are very active in social media and other areas of their lives. Getting positive reviews will help you get more confidence with new users and better leads.

What is another great thing about usability? SEO. In order to start gaining more leads, you need to drive more people to your website. Usability will not only create a great user experience, but it can also help you stand out from your competitors in Google searches. Google has put a huge focus on giving users what they need just by searching, so sites that demonstrate the capability to provide that information get ahead of those who are merely trying to beat or game the system.

Let’s recap. Evaluate your site and follow the checklist provided above. Test for usability and test often. Focus your design on helping the users reach their goals. Design to build trust, not to trick. The more trust you can build with the user, the stronger the relationship and the higher quality conversion you will receive.

Happy converting! :D

This article was originally posted on Toptal 

How To Improve ASP.NET App Performance In Web Farm With Caching

There are only two hard things in Computer Science: cache invalidation and naming things.

A Brief Introduction to Caching

Caching is a powerful technique for increasing performance through a simple trick: instead of doing expensive work (like a complicated calculation or a complex database query) every time we need a result, the system can store – or cache – the result of that work and simply supply it the next time it is requested, without needing to perform that work again (and can, therefore, respond tremendously faster).

Of course, the whole idea behind caching works only as long the result we cached remains valid. And here we get to the actual hard part of the problem: How do we determine when a cached item has become invalid and needs to be recreated?
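That basic trick is easy to picture in code. Here is a minimal, language-agnostic sketch of the cache-aside pattern in Python (all names here are made up for illustration, not from any real system):

```python
import time

cache = {}

def expensive_query(key):
    # Stand-in for a slow database query or a complicated calculation.
    time.sleep(0.01)
    return key.upper()

def get(key):
    # Cache-aside: return the cached result if present,
    # otherwise do the expensive work once and remember it.
    if key not in cache:
        cache[key] = expensive_query(key)
    return cache[key]

def invalidate(key):
    # The hard part is deciding *when* to call this.
    cache.pop(key, None)

print(get("customer:42"))  # slow: performs the query
print(get("customer:42"))  # fast: served from the cache
```

The second call never touches `expensive_query`; everything that follows in this article is about keeping that shortcut from returning stale data.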

Caching is a powerful technique for increasing performance

The ASP.NET in-memory cache is extremely fast
and a perfect fit for solving the distributed web farm caching problem.

A typical web application has to deal with a much higher volume of read requests than write requests. That is why a web application designed to handle a high load is architected to be scalable and distributed, deployed as a set of web tier nodes, usually called a farm. All these facts affect the applicability of caching.

In this article, we focus on the role caching can play in assuring high throughput and performance of web applications designed to handle a high load, and I am going to use the experience from one of my projects and provide an ASP.NET-based solution as an illustration.

The Problem of Handling a High Load

The actual problem I had to solve wasn’t an original one. My task was to make an ASP.NET MVC monolithic web application prototype be capable of handling a high load.

The necessary steps towards improving throughput capabilities of a monolithic web application are:

  • Enable it to run multiple copies of the web application in parallel, behind a load balancer, and serve all concurrent requests effectively (i.e., make it scalable).
  • Profile the application to reveal current performance bottlenecks and optimize them.
  • Use caching to increase read request throughput, since read requests typically constitute a significant part of the overall application load.

Caching strategies often involve use of some middleware caching server, like Memcached or Redis, to store the cached values. Despite their high adoption and proven applicability, there are some downsides to these approaches, including:

  • Network latencies introduced by accessing the separate cache servers can be comparable to the latencies of reaching the database itself.
  • The web tier’s data structures can be unsuitable for serialization and deserialization out of the box. To use cache servers, those data structures should support serialization and deserialization, which requires ongoing additional development effort.
  • Serialization and deserialization add runtime overhead with an adverse effect on performance.
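The last two points are easy to demonstrate. The following Python sketch (standing in for any web-tier language, with made-up data) compares a plain in-process read against a read that must deserialize on every cache hit:

```python
import pickle
import timeit

value = {"id": 42, "name": "Alice", "orders": list(range(100))}
in_proc_store = {"key": value}               # an in-process cache stores the object itself
remote_store = {"key": pickle.dumps(value)}  # a remote cache must store serialized bytes

in_proc = timeit.timeit(lambda: in_proc_store["key"], number=10_000)
deserializing = timeit.timeit(lambda: pickle.loads(remote_store["key"]), number=10_000)

# The deserializing read does real work on every hit; the in-process read does not.
print(f"in-process: {in_proc:.4f}s, deserializing: {deserializing:.4f}s")
```

The exact numbers vary by machine, but the deserializing path is consistently orders of magnitude slower, and that overhead is paid on every single cache read.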

All these issues were relevant in my case, so I had to explore alternative options.

How caching works

The built-in ASP.NET in-memory cache (System.Web.Caching.Cache) is extremely fast and can be used without serialization and deserialization overhead, both during development and at runtime. However, the ASP.NET in-memory cache also has its own drawbacks:

  • Each web tier node needs its own copy of cached values. This could result in higher database tier consumption upon node cold start or recycling.
  • Each web tier node should be notified when another node invalidates any portion of the cache by writing updated values. Since the cache is distributed, without proper synchronization most of the nodes will keep returning old values, which is typically unacceptable.

If the additional database tier load won’t lead to a bottleneck by itself, then implementing a properly distributed cache seems like an easy task to handle, right? Well, it’s not an easy task, but it is possible. In my case, benchmarks showed that the database tier shouldn’t be a problem, as most of the work happened in the web tier. So, I decided to go with the ASP.NET in-memory cache and focus on implementing the proper synchronization.

Introducing an ASP.NET-based Solution

As explained, my solution was to use the ASP.NET in-memory cache instead of the dedicated caching server. This entails each node of the web farm having its own cache, querying the database directly, performing any necessary calculations, and storing results in a cache. This way, all cache operations will be blazing fast thanks to the in-memory nature of the cache. Typically, cached items have a clear lifetime and become stale upon some change or writing of new data. So, from the web application logic, it is usually clear when the cache item should be invalidated.

The only problem left here is that when one of the nodes invalidates a cache item in its own cache, no other node will know about this update. So, subsequent requests serviced by other nodes will deliver stale results. To address this, each node should share its cache invalidations with the other nodes. Upon receiving such invalidation, other nodes could simply drop their cached value and get a new one at the next request.

Here, Redis can come into play. The power of Redis, compared to other solutions, comes from its Pub/Sub capabilities. Every client of a Redis server can create a channel and publish some data on it. Any other client is able to listen to that channel and receive the related data, very similar to any event-driven system. This functionality can be used to exchange cache invalidation messages between the nodes, so all nodes will be able to invalidate their cache when it is needed.
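The mechanics are easy to sketch. In this Python illustration, a tiny in-process bus stands in for Redis Pub/Sub (in reality the messages travel over the network to a Redis server; all class, channel, and key names here are made up):

```python
from collections import defaultdict

class Bus:
    """Minimal stand-in for Redis Pub/Sub: publishing fans out to all subscribers."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def publish(self, channel, message):
        for callback in self.subscribers[channel]:
            callback(message)

class Node:
    """A web farm node with its own in-memory cache."""
    def __init__(self, bus):
        self.cache = {}
        self.bus = bus
        # Every node listens for invalidations published by its peers.
        bus.subscribe("invalidate", self.on_invalidate)

    def on_invalidate(self, key):
        self.cache.pop(key, None)  # drop the stale entry; the next read refetches it

    def write(self, key, value):
        # ... write the new value to the shared database here ...
        self.bus.publish("invalidate", key)

bus = Bus()
node_a, node_b = Node(bus), Node(bus)
node_a.cache["user:1"] = node_b.cache["user:1"] = "old"
node_a.write("user:1", "new")  # node A updates the database and broadcasts
print(node_b.cache)            # node B dropped its stale copy: {}
```

Note that the nodes exchange only invalidations, never values; each node repopulates its own cache from the database on the next read.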

A group of ASP.NET web tier nodes using a Redis backplane

ASP.NET’s in-memory cache is straightforward in some ways and complex in others. In particular, it is straightforward in that it works as a map of key/value pairs, yet there is a lot of complexity related to its invalidation strategies and dependencies.

Fortunately, typical use cases are simple enough, and it’s possible to use a default invalidation strategy for all the items, with each cache item having at most a single dependency. In my case, I ended up with the following ASP.NET code for the interface of the caching service. (Note that this is not the actual code, as I omitted some details for the sake of simplicity and the proprietary license.)

public interface ICacheKey
{
    string Value { get; }
}

public interface IDataCacheKey : ICacheKey { }

public interface ITouchableCacheKey : ICacheKey { }

public interface ICacheService
{
    int ItemsCount { get; }

    T Get<T>(IDataCacheKey key, Func<T> valueGetter);
    T Get<T>(IDataCacheKey key, Func<T> valueGetter, ICacheKey dependencyKey);
}

Here, the cache service basically allows two things. First, it enables storing the result of some value getter function in a thread-safe manner. Second, it ensures that the currently valid value is always returned when it is requested. Once the cache item becomes stale or is explicitly evicted from the cache, the value getter is called again to retrieve a current value. The cache key is abstracted away by the ICacheKey interface, mainly to avoid hard-coding cache key strings all over the application.
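A minimal sketch of such a get-or-compute service, written in Python for brevity (a lock provides the thread safety the interface promises; all names are illustrative, not the production code):

```python
import threading

class CacheService:
    def __init__(self):
        self._items = {}
        self._lock = threading.Lock()

    @property
    def items_count(self):
        return len(self._items)

    def get(self, key, value_getter):
        # Fast path: return an existing value without computing anything.
        with self._lock:
            if key in self._items:
                return self._items[key]
        # Compute outside the lock so slow getters don't block every reader.
        value = value_getter()
        with self._lock:
            # Another thread may have won the race; keep the first stored value.
            return self._items.setdefault(key, value)

    def drop(self, key):
        with self._lock:
            self._items.pop(key, None)

cache = CacheService()
print(cache.get("answer", lambda: 6 * 7))  # computes: 42
print(cache.get("answer", lambda: 0))      # cached: still 42
```

Passing the getter into get() is what lets the cache re-run it transparently after an eviction, which is exactly the contract the C# interface above expresses.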

To invalidate cache items, I introduced a separate service, which looked like this:

public interface ICacheInvalidator
{
    bool IsSessionOpen { get; }

    void OpenSession();
    void CloseSession();

    void Drop(IDataCacheKey key);
    void Touch(ITouchableCacheKey key);
    void Purge();
}

Besides the basic methods for dropping data items and touching keys (which only have dependent data items), there are a few methods related to some kind of “session”.

Our web application used Autofac for dependency injection, an implementation of the inversion of control (IoC) design pattern for dependency management. This allows developers to create their classes without having to worry about dependencies, as the IoC container manages that burden for them.

The cache service and cache invalidator have drastically different lifecycles regarding IoC. The cache service was registered as a singleton (one instance, shared between all clients), while the cache invalidator was registered as an instance per request (a separate instance was created for each incoming request). Why?

The answer has to do with an additional subtlety we needed to handle. The web application uses a Model-View-Controller (MVC) architecture, which helps mainly in the separation of UI and logic concerns. So, a typical controller action is wrapped in a subclass of ActionFilterAttribute. In the ASP.NET MVC framework, such C# attributes are used to decorate the controller’s action logic in some way. That particular attribute was responsible for opening a new database connection and starting a transaction at the beginning of the action. At the end of the action, the filter attribute subclass was responsible for committing the transaction in case of success and rolling it back in the event of failure.

If cache invalidation happened right in the middle of the transaction, there could be a race condition whereby the next request to that node would successfully put the old (still visible to other transactions) value back into the cache. To avoid this, all invalidations are postponed until the transaction is committed. After that, cache items are safe to evict, and, in the case of a transaction failure, there is no need for cache modification at all.

That was the exact purpose of the “session”-related parts in the cache invalidator. Also, that is the purpose of its lifetime being bound to the request. The ASP.NET code looked like this:

class HybridCacheInvalidator : ICacheInvalidator
{
    // ...

    public void Drop(IDataCacheKey key)
    {
        if (key == null)
            throw new ArgumentNullException("key");
        if (!IsSessionOpen)
            throw new InvalidOperationException("Session must be opened first.");
        _postponedRedisMessages.Add(new Tuple<string, string>("drop", key.Value));
    }

    public void CloseSession()
    {
        if (!IsSessionOpen)
            return;
        _postponedRedisMessages.ForEach(m => PublishRedisMessageSafe(m.Item1, m.Item2));
        _postponedRedisMessages = null;
    }

    // ...
}

The PublishRedisMessageSafe method here is responsible for sending the message (second argument) to a particular channel (first argument). In fact, there are separate channels for drop and touch, so the message handler for each of them knew exactly what to do - drop/touch the key equal to the received message payload.

One of the tricky parts was to manage the connection to the Redis server properly. In the case of the server going down for any reason, the application should continue to function correctly. When Redis is back online again, the application should seamlessly start to use it again and exchange messages with other nodes again. To achieve this, I used the StackExchange.Redis library and the resulting connection management logic was implemented as follows:

class HybridCacheService : ...
{
    public void Initialize()
    {
        try
        {
            Multiplexer = ConnectionMultiplexer.Connect(_configService.Caching.BackendServerAddress);
            Multiplexer.ConnectionFailed += (sender, args) => UpdateConnectedState();
            Multiplexer.ConnectionRestored += (sender, args) => UpdateConnectedState();
        }
        catch (Exception ex)
        {
            // Connection failure handling omitted.
        }
    }

    private void UpdateConnectedState()
    {
        if (Multiplexer.IsConnected && _currentCacheService is NoCacheServiceStub) {
            _currentCacheService = _inProcCacheService;
            _logger.Debug("Connection to remote Redis server restored, switched to in-proc mode.");
        } else if (!Multiplexer.IsConnected && _currentCacheService is InProcCacheService) {
            _currentCacheService = _noCacheStub;
            _logger.Debug("Connection to remote Redis server lost, switched to no-cache mode.");
        }
    }
}

Here, ConnectionMultiplexer is a type from the StackExchange.Redis library, which is responsible for transparent work with underlying Redis. The important part here is that, when a particular node loses connection to Redis, it falls back to no cache mode to make sure no request will receive stale data. After the connection is restored, the node starts to use the in-memory cache again.
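The switching logic itself boils down to a null-object fallback. Here is a Python sketch of the idea (the class and method names are made up; they mirror, rather than reproduce, the C# above):

```python
class InProcCache:
    """Normal mode: remember computed values."""
    def __init__(self):
        self.items = {}
    def get(self, key, value_getter):
        if key not in self.items:
            self.items[key] = value_getter()
        return self.items[key]

class NoCacheStub:
    """Fallback mode: with the backplane down we cannot broadcast invalidations,
    so serve every request fresh rather than risk stale data."""
    def get(self, key, value_getter):
        return value_getter()

class HybridCache:
    def __init__(self):
        self._in_proc = InProcCache()
        self._current = NoCacheStub()  # start pessimistically until connected

    def on_connection_changed(self, connected):
        self._current = self._in_proc if connected else NoCacheStub()

    def get(self, key, value_getter):
        return self._current.get(key, value_getter)

cache = HybridCache()
cache.on_connection_changed(True)
cache.get("k", lambda: "v")             # cached while Redis is reachable
cache.on_connection_changed(False)
print(cache.get("k", lambda: "fresh"))  # prints "fresh": no-cache mode
```

Because both modes share the same get() interface, the rest of the application never needs to know whether the backplane is up.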

Here are examples of action without usage of the cache service (SomeActionWithoutCaching) and an identical operation which uses it (SomeActionUsingCache):

class SomeController : Controller
{
    public ISomeService SomeService { get; set; }
    public ICacheService CacheService { get; set; }

    public ActionResult SomeActionWithoutCaching()
    {
        return View(SomeService.GetModelData());
    }

    public ActionResult SomeActionUsingCache()
    {
        return View(CacheService.Get(
            /* Cache key creation omitted */,
            () => SomeService.GetModelData()));
    }
}

A code snippet from an ISomeService implementation could look like this:

class DefaultSomeService : ISomeService
{
    public ICacheInvalidator _cacheInvalidator;

    public SomeModel GetModelData()
    {
        return /* Do something to get model data. */;
    }

    public void SetModelData(SomeModel model)
    {
        /* Do something to set model data. */
        _cacheInvalidator.Drop(/* Cache key creation omitted */);
    }
}

Benchmarking and Results

After the caching ASP.NET code was all set, it was time to use it in the existing web application logic, and benchmarking came in handy for deciding where to put most of the effort of rewriting the code to use caching. It’s crucial to pick out a few of the most common or most critical use cases to benchmark. After that, a tool like Apache JMeter could be used for two things:

  • To benchmark these key use cases via HTTP requests.
  • To simulate high load for the web node under test.

To get a performance profile, any profiler capable of attaching to the IIS worker process can be used. In my case, I used JetBrains dotTrace Performance. After some time spent experimenting to determine the correct JMeter parameters (such as concurrency and request count), it becomes possible to start collecting performance snapshots, which are very helpful in identifying hotspots and bottlenecks.

In my case, some use cases showed that about 15% to 45% of overall code execution time was spent in database reads, with obvious bottlenecks. After I applied caching, performance nearly doubled (i.e., it was twice as fast) for most of them.


As you may see, my case could seem like an example of what is usually called “reinventing the wheel”: Why bother to try to create something new, when there are already best practices widely applied out there? Just set up a Memcached or Redis, and let it go.

I definitely agree that usage of best practices is usually the best option. But before blindly applying any best practice, one should ask oneself: How applicable is this “best practice”? Does it fit my case well?

The way I see it, proper options and tradeoff analysis is a must upon making any significant decision, and that was the approach I chose because the problem was not so easy. In my case, there were many factors to consider, and I did not want to take a one-size-fits-all solution when it might not be the right approach for the problem at hand.

In the end, with the proper caching in place, I did get almost 50% performance increase over the initial solution.

Source: Toptal  

Tips & Tricks for Any Developer’s Successful Online Portfolio

At Toptal we screen a lot of designers, so over time we have learned what goes into making a captivating and coherent portfolio. Each designer’s portfolio is an introduction to their skill set and strengths, and represents them to future employers, clients, and other designers. It shows not only past work, but also future direction. There are several things to keep in mind when building a portfolio, so here is the Toptal guide of tips and common mistakes for portfolio design.

1. Content Comes First

The main use of the portfolio is to present your design work. Thus, the content should inform the layout and composition of the document. Consider what kind of work you have, and how it might be best presented. A UX designer may require a series of animations to describe a set of actions, whereas the visual designer may prefer spreads of full images.

The portfolio design itself is an opportunity to display your experience and skills. However, excessive graphic flourishes shouldn’t impede the legibility of the content. Instead, consider how the backgrounds of your portfolio can augment or enhance your work. Using background colors similar to those in the content will enhance the details of your project, and lighter content will stand out against dark backgrounds. Legibility is critical, so ensure that your portfolio can be experienced in any medium and considers accessibility issues such as color palettes and readability.

You should approach your portfolio in the same manner you would any project. What is the goal here? Present it in a way that makes sense to viewers who are not necessarily visually savvy. Edit out projects that may be unnecessary. Your portfolio should essentially be a taster of what you can do, preparing the client for what to expect to see more of in the interview. The more efficiently you can communicate who you are as a designer, the better.

2. Consider Your Target Audience

A portfolio for a client should likely be different from a portfolio shown to a blog editor or an art director. Your professional portfolio should always cater to your target audience. Edit it accordingly. If your client needs branding, then focus on your branding work. If your client needs UX strategy, then make sure to showcase your process.

Even from client to client, or project to project, your portfolio will need tweaking. If you often float between several design disciplines, as many designers do, it is useful to curate a print design portfolio separately from a UX or visual design portfolio.

3. Tell the Stories of Your Projects

As the design industry has evolved, so have our clients, and their appreciation for our expertise and what they hire us to do. Our process is often as interesting and important to share with them, as the final deliverables. Try to tell the story of your product backwards, from final end point through to the early stages of the design process. Share your sketches, your wireframes, your user journeys, user personas, and so on.

Showing your process allows the reader to understand how you think and work through problems. Consider this an additional opportunity to show that you have an efficient and scalable process.

4. Be Professional in Your Presentation

Attention to detail, both in textual and design content are important aspects of any visual presentation, so keep an eye on alignment, image compression, embedded fonts and other elements, as you would any project. The careful treatment of your portfolio should reflect how you will handle your client’s work.

With any presentation, your choice of typeface will impact the impression you give, so do research the meaning behind a font family, and when in doubt, ask your typography savvy friends for advice.

5. Words Are As Important As Work

Any designer should be able to discuss their projects as avidly as they can design them, so your copywriting is essential. True, your work is the main draw of the portfolio; however, the text, and how you write about your work, can give viewers deeper insight into your projects.

Not everyone who sees your work comes from a creative, or visual industry. Thus, the descriptive text that you provide for images is essential. At the earlier stages of a project, where UX is the main focus, often you will need to complement your process with clearly defined content, both visual diagrams, and textual explanation.

Text can also be important for providing the context of the project. Often much of your work is done in the background, so why not present it somehow? What was the brief, how did the project come about?

Avoid These Common Mistakes

The culture of portfolio networks like Behance or Dribbble has cultivated many bad habits and trends in portfolio design. A popular trend is the perspective view of a product on a device. However, these images often do little to effectively represent the project, and they hide details and content. Clients need to see what you have worked on before, with the most logical visualisation possible. Showcasing your products in a frontal view, with an “above the fold” approach, often makes more sense to the non-visual user. Usually, the best web pages and other digital content are presented with no scrolling required. Avoid sending your website portfolio as one long strip, as this is only appropriate for communicating with developers.

Ensure that you cover all portfolio formats. Today you are expected to have an online presence; however, some clients prefer that you send a classic A4 or US Letter sized PDF. You need to have the content ready for any type of presentation.

Try to use a consistent presentation style and content throughout the projects in your portfolio. Differentiate each project with simple solutions like different coloured backgrounds, or textures, yet within the same language.


Source: Toptal 


Getting Started with Elixir Programming Language

If you have been reading blog posts, Hacker News threads, your favorite developers’ tweets, or listening to podcasts, at this point you’ve probably heard about the Elixir programming language. The language was created by José Valim, a well-known developer in the open-source world. You may know him from the Ruby on Rails MVC framework, or from the devise and simple_form Ruby gems that he and his co-workers at Plataformatec have been working on over the last few years.

According to José Valim, Elixir was born in 2011. He had the idea to build a new language due to the lack of good tools for solving concurrency problems in the Ruby world. At that time, after spending time studying concurrency- and distribution-focused languages, he found two languages that he liked: Erlang, and Clojure, which runs on the JVM. He liked everything he saw in Erlang (the Erlang VM), and he hated the things he didn’t see, like polymorphism, metaprogramming, and language extensibility, which Clojure was good at. So Elixir was born with that in mind: to be an alternative to Clojure, a dynamic language that runs on the Erlang virtual machine with good extensibility support.

Getting Started with Elixir Programming Language

Elixir describes itself as a dynamic, functional language with immutable state and an actor-based approach to concurrency, designed for building scalable and maintainable applications with a simple, modern, and tidy syntax. The language runs on the Erlang virtual machine, a battle-proven, high-performance, distributed virtual machine known for its low latency and fault tolerance.

Before we see some code, it’s worth saying that Elixir has been embraced by a growing community. If you want to learn Elixir today, you will easily find books, libraries, conferences, meetups, podcasts, blog posts, newsletters, and all sorts of learning resources out there; the language has also been welcomed by Erlang’s creators.

Let’s see some code!

Install Elixir:

Installing Elixir is super easy on all major platforms and is a one-liner on most of them.

Arch Linux

Elixir is available on Arch Linux through the official repositories:

pacman -S elixir


Ubuntu

Installing Elixir on Ubuntu is a bit tedious, but easy enough nonetheless:

wget https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb && sudo dpkg -i erlang-solutions_1.0_all.deb
sudo apt-get update
sudo apt-get install esl-erlang
sudo apt-get install elixir


OS X

Install Elixir on OS X using Homebrew:

brew install elixir

Meet IEx

After the installation is completed, it’s time to open your shell. You will spend a lot of time in your shell if you want to develop in Elixir.

Elixir’s interactive shell, or IEx, is a REPL (Read-Eval-Print Loop) where you can explore Elixir. You can input expressions there and they will be evaluated, giving you immediate feedback. Keep in mind that your code is truly evaluated and not compiled, so make sure not to run profiling or benchmarks in the shell.

The Break Command

There’s an important thing you need to know before you start the IEx REPL - how to exit it.

You’re probably used to hitting CTRL+C to close programs running in the terminal. If you hit CTRL+C in the IEx REPL, you will open up the Break Menu. Once in the break menu, you can hit CTRL+C again to quit the shell, or simply press a.

I’m not going to dive into the break menu functions. But, let’s see a few IEx helpers!

IEx Helpers

IEx provides a bunch of helpers; in order to list all of them, type h().

Running h() prints a list of all the available helpers, along with a short description of each.

Here are some of my favorites; I think they will become yours as well.

  • h/0, as we just saw, prints the help message.
  • h/1 is the same helper, but it expects one argument: it prints the documentation for whatever you pass it.

For instance, whenever you want to see the documentation of String.strip/2, you can easily do:

h String.strip/2

Probably the second most useful IEx helper you’re going to use while programming in Elixir is c/2, which compiles a given Elixir file (or a list of files) and accepts, as an optional second parameter, a path to write the compiled files to.

Let’s say you are working on one of the http://exercism.io/ Elixir exercises, the Anagram exercise.

So, you have implemented the Anagram module, which has the function match/2, in the anagram.exs file. As the good developer you are, you have also written a few specs to make sure everything works as expected.

This is how your current directory looks:

anagram.exs
anagram_test.exs

Now, in order to run your tests against the Anagram module, you need to run/compile the tests:

elixir anagram_test.exs

As you just saw, in order to compile and run a file, simply invoke the elixir executable, passing as an argument the path of the file you want to compile.

Now let’s say you want to run the IEx REPL with the Anagram module accessible in the session context. There are two commonly used options. The first is to require the file with the -r option, as in iex -r anagram.exs. The second is to compile it right from the IEx session:

iex> c "anagram.exs"

Simple, just like that!

Ok, what if you want to recompile a module? Should you exit IEx, run it again, and compile the file again? Nope! If you have a good memory, you will remember that when we listed all the helpers available in the IEx REPL, we saw something about a recompile helper, r/1. Let’s see how it works:

iex> r Anagram

Notice that this time, you passed as an argument the module itself and not the file path.

As we saw, IEx has a bunch of other useful helpers that will help you learn and understand better how an Elixir program works.

Basics of Elixir Types

Numbers

There are two types of numbers: arbitrary-sized integers and floating-point numbers.

Integers

Integers can be written in decimal (1234), hexadecimal (0xcafe), octal (0o765), and binary (0b1010) bases.

As in Ruby, you can use underscores to separate groups of three digits when writing large numbers. For instance, you could write a hundred million like this:

100_000_000

Floats

Floats are IEEE 754 double precision, with about 16 digits of accuracy and a maximum exponent of around 10^308.

Floats are written using a decimal point. There must be at least one digit before and after the point. You can also append a trailing exponent. For instance: 1.0, 0.314159e1, and 314159.0e-5.

Atoms

Atoms are constants that represent names. They are immutable values. You write an atom with a leading colon : and a sequence of letters, digits, underscores, and at signs @. You can also write them with a leading colon : and an arbitrary sequence of characters enclosed by quotes.

Atoms are a very powerful tool; they are used to reference Erlang functions, and they serve as keys and as Elixir module names.

Here are a few valid atoms.

:name, :first_name, :"last name",  :===, :is_it_@_question?

Booleans

Of course, booleans take the values true and false. But the nice thing about them is that, at the end of the day, they’re just atoms.

Strings

By default, strings in Elixir are UTF-8 compliant. You can have an arbitrary number of characters enclosed by " or '. You can also have interpolated expressions inside a string, as well as escaped characters.

Be aware that single-quoted strings are actually lists of character codes (charlists), not binaries; only double-quoted strings are binaries.

Anonymous Functions

As a functional language, Elixir has anonymous functions as a basic type. A simple way to write a function is fn (argument_list) -> body end. But a function can have multiple bodies with multiple argument lists, guard clauses, and so on.

Dave Thomas, in the Programming Elixir book, suggests we think of fn…end as being the quotes that surround a string literal, where instead of returning a string value we are returning a function.

Tuples

A tuple is an immutable, indexed collection. Tuples are fast to return their size but slow to append new values, due to their immutable nature: when updating a tuple, you are actually creating a whole new copy of it.

Tuples are very often used as the return value of a function. While coding in Elixir you will very often see this: {:ok, something_else_here}.

Here’s how we write a tuple: {?a,?b,?c}.
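For example, accessing and "updating" a tuple (the update returns a new tuple):

```elixir
t = {:ok, "file.txt"}
elem(t, 0)            # fast positional access => :ok
tuple_size(t)         # => 2
put_elem(t, 1, "new") # returns a new tuple; t itself is unchanged
```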

Pattern Matching

I won’t be able to explain everything you need to know about pattern matching here; however, what you are about to read covers a lot of what you need to get started.

Elixir uses = as a match operator. To understand this, we need to unlearn what we know about = from traditional languages, where the equals operator is for assignment. In Elixir, the equals operator is for pattern matching.

So, this is how it works for values on the left-hand side: if they are variables, they are bound to the right-hand side; if they are not variables, Elixir tries to match them with the right-hand side.
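A few matches illustrating both cases:

```elixir
x = 1                 # x is a variable, so it is bound to 1
1 = x                 # 1 is not a variable, so Elixir matches it: succeeds
{a, :ok} = {42, :ok}  # a is bound to 42; :ok must match literally
# 2 = x               # would raise a MatchError
```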

Pin Operator

Elixir provides a way to force pattern matching against the current value of a variable on the left-hand side: the pin operator ^.
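For example:

```elixir
x = 1
x = 2    # plain = rebinds x to 2
^x = 2   # ^ forces a match against the current value of x: succeeds
# ^x = 3 # would raise a MatchError instead of rebinding x
```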


Lists

In Elixir, lists look like the arrays we know from other languages, but they are not. Lists are linked structures consisting of a head and a tail.
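You can take a list apart, or build one up, with the | (cons) operator:

```elixir
list = [1, 2, 3]
[head | tail] = list # head => 1, tail => [2, 3]
[0 | list]           # prepending to the head is cheap => [0, 1, 2, 3]
hd(list)             # => 1
tl(list)             # => [2, 3]
```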

Keyword Lists

Keyword lists are lists of two-element tuples, where the first element of each tuple is an atom.

You simply write them as lists of pairs. For instance: [{:one, 1}, {:two, 2}, {:three, 3}]. There’s also a shortcut for defining them, which looks like this: [one: 1, two: 2, three: 3].

In order to retrieve an item from a keyword list you can either use:

Keyword.get([{:one, 1}, {:two, 2}, {:three, 3}], :one)

Or use the shortcut:

[{:one, 1}, {:two, 2}, {:three, 3}][:one]

Because keyword lists are implemented as linked lists, retrieving a value means walking the list, which is an expensive operation. If you are storing data that needs fast access, you should use a Map.


Maps

Maps are an efficient collection of key/value pairs. A key can be a value of any type, although usually all keys are of the same type. Unlike keyword lists, maps allow only one entry for a given key. They stay efficient as they grow, and they can be used in Elixir pattern matching. In general, use maps when you need an associative array.

Here’s how you can write a Map:

%{ :one => 1, :two => 2, 3 => 3, "four" => 4, [] => %{}, {} => [k: :v]}
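Accessing, updating, and matching maps is straightforward:

```elixir
m = %{:one => 1, "four" => 4}
m[:one]         # => 1
m["four"]       # => 4
m.one           # dot access works for atom keys => 1
%{m | one: 100} # update syntax for an existing key
%{one: n} = m   # maps work with pattern matching: n => 1
```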


Elixir is awesome: easy to understand, with simple but powerful types, and with very useful tooling around it that will help you as you begin to learn. In this first part, we covered the basic data types Elixir programs are built on and the operators that power them. In later parts, we will dive deeper into the world of Elixir: functional and concurrent programming.

Source: Toptal 

How Sequel and Sinatra Solve Ruby’s API Problem


In recent years, the number of JavaScript single-page application frameworks and mobile applications has increased substantially. This imposes a correspondingly increased demand for server-side APIs. With Ruby on Rails being one of today’s most popular web development frameworks, it is a natural choice among many developers for creating back-end API applications.

Yet while the Ruby on Rails architectural paradigm makes it quite easy to create back-end API applications, using Rails only for the API is overkill. In fact, it’s overkill to the point that even the Rails team has recognized this and has therefore introduced a new API-only mode in version 5. With this new feature in Ruby on Rails, creating API-only applications in Rails became an even easier and more viable option.

But there are other options too. The most notable are two very mature and powerful gems, which in combination provide powerful tools for creating server-side APIs. They are Sinatra and Sequel.

Both of these gems have a very rich feature set: Sinatra serves as the domain specific language (DSL) for web applications, and Sequel serves as the object-relational mapping (ORM) layer. So, let’s take a brief look at each of them.

API With Sinatra and Sequel: Ruby Tutorial

Ruby API on a diet: introducing Sequel and Sinatra.


Sinatra is a Rack-based web application framework. Rack is a well-known Ruby web server interface. It is used by many frameworks, like Ruby on Rails, for example, and supports a lot of web servers, like WEBrick, Thin, or Puma. Sinatra provides a minimal interface for writing web applications in Ruby, and one of its most compelling features is support for middleware components. These components lie between the application and the web server, and can monitor and manipulate requests and responses.

To utilize this Rack feature, Sinatra defines an internal DSL for creating web applications. Its philosophy is very simple: routes are represented by an HTTP method, followed by a route-matching pattern and a Ruby block within which the request is processed and the response is formed.

get '/' do
  'Hello from sinatra'
end

The route matching pattern can also include a named parameter. When the route block is executed, the parameter value is passed to the block through the params variable.

get '/players/:sport_id' do
  # Parameter value accessible through params[:sport_id]
end

Matching patterns can use the splat operator *, which makes parameter values available through params[:splat].

get '/players/*/:year' do
  # /players/performances/2016
  # Parameters - params['splat'] -> ['performances'], params[:year] -> 2016
end

This is not the end of Sinatra’s possibilities related to route matching. It can use more complex matching logic through regular expressions, as well as custom matchers.
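To get a feel for how named-parameter matching can work, here is a simplified pure-Ruby sketch (an illustration of the idea only, not Sinatra's actual internals): a pattern like '/players/:sport_id' is compiled into a regular expression with named capture groups.

```ruby
# Compile a route pattern with :named parameters into a regex.
# This is a toy sketch, not Sinatra's real implementation.
def compile_route(pattern)
  source = pattern.gsub(/:(\w+)/) { "(?<#{Regexp.last_match(1)}>[^/]+)" }
  Regexp.new("\\A#{source}\\z")
end

# Returns a hash of extracted parameters, or nil when the path does not match.
def match_route(pattern, path)
  md = compile_route(pattern).match(path)
  md && md.named_captures
end

match_route('/players/:sport_id', '/players/42')  # => {"sport_id" => "42"}
```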

Sinatra understands all of the standard HTTP verbs needed for creating a REST API: GET, POST, PUT, PATCH, DELETE, and OPTIONS. Route priorities are determined by the order in which the routes are defined, and the first route that matches a request is the one that serves it.

Sinatra applications can be written in two ways: using the classical or the modular style. The main difference between them is that, with the classical style, we can have only one Sinatra application per Ruby process. The other differences are minor enough that, in most cases, they can be ignored and the default settings used.

Classical Approach

Implementing a classical application is straightforward. We just have to load Sinatra and implement the route handlers:

require 'sinatra'
get '/' do
  'Hello from Sinatra'
end

By saving this code to a demo_api_classic.rb file, we can start the application directly by executing the following command:

ruby demo_api_classic.rb

However, if the application is to be deployed with Rack handlers, like Passenger, it is better to start it with the Rack configuration config.ru file.

require './demo_api_classic'
run Sinatra::Application

With the config.ru file in place, the application is started with the following command:

rackup config.ru

Modular Approach

Modular Sinatra applications are created by subclassing either Sinatra::Base or Sinatra::Application:

require 'sinatra'

class DemoApi < Sinatra::Application
  # Application code

  run! if app_file == $0
end

The statement beginning with run! is used for starting the application directly, with ruby demo_api.rb, just as with the classical application. On the other hand, if the application is to be deployed with Rack handlers, the content of config.ru must be:

require './demo_api'
run DemoApi


Sequel is the second tool in this set. In contrast to ActiveRecord, which is part of Ruby on Rails, Sequel’s dependencies are very small. At the same time, it is quite feature-rich and can be used for all kinds of database manipulation tasks. With its simple domain-specific language, Sequel relieves the developer of all the trouble of maintaining connections, constructing SQL queries, and fetching data from (and sending data back to) the database.

For example, establishing a connection with the database is very simple:

DB = Sequel.connect(adapter: :postgres, database: 'my_db', host: 'localhost', user: 'db_user')

The connect method returns a database object, in this case, Sequel::Postgres::Database, which can be further used to execute raw SQL.

DB['select count(*) from players']

Alternatively, to create a new dataset object:

DB[:players]
Both of these statements create a dataset object, which is a basic Sequel entity.

One of the most important Sequel dataset features is that it does not execute queries immediately. This makes it possible to store datasets for later use and, in most cases, to chain them.

users = DB[:players].where(sport: 'tennis')

So, if a dataset does not hit the database immediately, the question is, when does it? Sequel executes SQL against the database when so-called “executable methods” are used. These methods are, to name a few, all, each, map, first, and last.
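The deferred-execution idea can be sketched in plain Ruby (a toy illustration, not Sequel's real implementation): chained calls only accumulate state, and the SQL is produced when an executable method runs.

```ruby
# Toy lazy dataset: `where` builds up state without touching a database;
# only an "executable method" like `all` produces the SQL.
class ToyDataset
  def initialize(table, conditions = [])
    @table = table
    @conditions = conditions
  end

  def where(clause)
    # returns a new dataset; no query is executed here
    ToyDataset.new(@table, @conditions + [clause])
  end

  def sql
    q = "SELECT * FROM #{@table}"
    q += " WHERE #{@conditions.join(' AND ')}" unless @conditions.empty?
    q
  end

  def all
    sql # in a real library, this is where the database would be hit
  end
end

players = ToyDataset.new(:players).where("sport = 'tennis'")
players.all  # => "SELECT * FROM players WHERE sport = 'tennis'"
```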

Sequel is extensible, and its extensibility is a result of a fundamental architectural decision to build a small core complemented with a plugin system. Features are easily added through plugins which are, actually, Ruby modules. The most important plugin is the Model plugin. It is an empty plugin which does not define any class or instance methods by itself. Instead, it includes other plugins (submodules) which define a class, instance or model dataset methods. The Model plugin enables the use of Sequel as the object-relational-mapping (ORM) tool and is often referred to as the “base plugin”.

class Player < Sequel::Model
end

The Sequel model automatically parses the database schema and sets up all the necessary accessor methods for all columns. It assumes that the table name is the plural, underscored version of the model name. If there is a need to work with a database that does not follow this naming convention, the table name can be explicitly set when the model is defined.

class Player < Sequel::Model(:player)
end

So, we now have everything we need to start building the back-end API.

Read the full article from Toptal

The Six Commandments of Good Code: Write Code that Stands the Test of Time

Humans have only been grappling with the art and science of computer programming for roughly half a century. Compared to most arts and sciences, computer science is in many ways still just a toddler, walking into walls, tripping over its own feet, and occasionally throwing food across the table. As a consequence of its relative youth, I don’t believe we have a consensus yet on what a proper definition of “good code” is, as that definition continues to evolve. Some will say “good code” is code with 100% test coverage. Others will say it’s super fast, has killer performance, and will run acceptably on 10-year-old hardware. While these are all laudable goals for software developers, I venture to throw another target into the mix: maintainability. Specifically, “good code” is code that is easily and readily maintainable by an organization (not just by its author!) and will live for longer than just the sprint it was written in. The following are some things I’ve discovered in my career as an engineer at big companies and small, in the USA and abroad, that seem to correlate with maintainable, “good” software.

Never settle for code that just "works." Write superior code.

Commandment #1: Treat Your Code the Way You Want Others’ Code to Treat You

I’m far from the first person to write that the primary audience for your code is not the compiler/computer, but whoever next has to read, understand, maintain, and enhance it (which will not necessarily be you six months from now). Any engineer worth their pay can produce code that “works”; what distinguishes a superb engineer is the ability to efficiently write maintainable code that supports a business long term, and the skill to solve problems simply, in a clear and maintainable way.

In any programming language, it is possible to write good code or bad code. Assuming we judge a programming language by how well it facilitates writing good code (it should at least be one of the top criteria, anyway), any programming language can be “good” or “bad” depending on how it is used (or abused).

An example of a language that many consider “clean” and readable is Python. The language itself enforces some level of whitespace discipline, and the built-in APIs are plentiful and fairly consistent. That said, it’s possible to create unspeakable monsters. For example, one can define a class and define/redefine/undefine any and every method on that class during runtime (often referred to as monkey patching). This technique naturally leads to, at best, an inconsistent API and, at worst, an impossible-to-debug monster. One might naively think, “Sure, but nobody does that!” Unfortunately they do, and it doesn’t take long browsing PyPI before you run into substantial (and popular!) libraries that (ab)use monkey patching extensively as the core of their APIs. I recently used a networking library whose entire API changes depending on the network state of an object. Imagine, for example, calling client.connect() and sometimes getting a MethodDoesNotExist error instead of HostNotFound or NetworkUnavailable.

Commandment #2: Good Code Is Easily Read and Understood, in Part and in Whole

Good code is easily read and understood, in part and in whole, by others (as well as by the author in the future, trying to avoid the “Did I really write that?” syndrome).

By “in part” I mean that, if I open up some module or function in the code, I should be able to understand what it does without having to also read the entire rest of the codebase. It should be as intuitive and self-documenting as possible.

Code that constantly references minute details that affect behavior from other (seemingly irrelevant) portions of the codebase is like reading a book where you have to reference the footnotes or an appendix at the end of every sentence. You’d never get through the first page!

Some other thoughts on “local” readability:

  • Well encapsulated code tends to be more readable, separating concerns at every level.

  • Names matter. Activate your Thinking, Fast and Slow “system 2” (the slow, deliberate way in which the brain forms thoughts) and put some actual, careful thought into variable and method names. The few extra seconds can pay significant dividends. A well-named variable can make the code much more intuitive, whereas a poorly-named variable can lead to headfakes and confusion.

  • Cleverness is the enemy. When using fancy techniques, paradigms, or operations (such as list comprehensions or ternary operators), be careful to use them in a way that makes your code more readable, not just shorter.

  • Consistency is a good thing. Consistency in style, both in terms of how you place braces but also in terms of operations, improves readability greatly.

  • Separation of concerns. A given project manages an innumerable number of locally important assumptions at various points in the codebase. Expose each part of the codebase to as few of those concerns as possible. Say you had a people management system where a person object may sometimes have a null last name. To somebody writing code in a page that displays person objects, that could be really awkward! And unless you maintain a handbook of “awkward and non-obvious assumptions our codebase has” (I know I don’t), your display-page programmer is not going to know that last names can be null, and is probably going to write code with a null pointer exception in it when the last-name-is-null case shows up. Instead, handle these cases with well-thought-out APIs and contracts that different pieces of your codebase use to interact with each other.

Commandment #3: Good Code Has a Well Thought-out Layout and Architecture to Make Managing State Obvious

State is the enemy. Why? Because it is the single most complex part of any application and needs to be dealt with very deliberately and thoughtfully. Common problems include database inconsistencies, partial UI updates where new data isn’t reflected everywhere, out of order operations, or just mind numbingly complex code with if statements and branches everywhere leading to difficult to read and even harder to maintain code. Putting state on a pedestal to be treated with great care, and being extremely consistent and deliberate with regard to how state is accessed and modified, dramatically simplifies your codebase. Some languages (Haskell for example) enforce this at a programmatic and syntactic level. You’d be amazed how much the clarity of your codebase can improve if you have libraries of pure functions that access no external state, and then a small surface area of stateful code which references the outside pure functionality.
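As a small sketch of this shape (the names and numbers here are invented purely for illustration), business rules live in pure functions while a thin class owns the only mutable state:

```ruby
# Pure core: same inputs always give the same output; no external state touched.
module Pricing
  def self.total(items, tax_rate)
    subtotal = items.sum { |i| i[:price] * i[:qty] }
    (subtotal * (1 + tax_rate)).round(2)
  end
end

# The small stateful surface: holds the cart and delegates logic to the pure core.
class Cart
  def initialize
    @items = []
  end

  def add(item)
    @items << item
  end

  def checkout(tax_rate)
    Pricing.total(@items, tax_rate)
  end
end

cart = Cart.new
cart.add(price: 10.0, qty: 2)
cart.checkout(0.1)  # => 22.0
```

The pure module is trivial to test and reason about; only the thin shell needs care around state.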

Commandment #4: Good Code Doesn’t Reinvent the Wheel, it Stands on the Shoulders of Giants

Before potentially reinventing a wheel, think about how common the problem is you’re trying to solve or the function is you’re trying to perform. Somebody may have already implemented a solution you can leverage. Take the time to think about and research any such options, if appropriate and available.

That said, a completely reasonable counter-argument is that dependencies don’t come for “free” without any downside. By using a 3rd party or open source library that adds some interesting functionality, you are making the commitment to, and becoming dependent upon, that library. That’s a big commitment; if it’s a giant library and you only need a small bit of functionality do you really want the burden of updating the whole library if you upgrade, for example, to Python 3.x? And moreover, if you encounter a bug or want to enhance the functionality, you’re either dependent on the author (or vendor) to supply the fix or enhancement, or, if it’s open source, find yourself in the position of exploring a (potentially substantial) codebase you’re completely unfamiliar with trying to fix or modify an obscure bit of functionality.

Certainly the more well used the code you’re dependent upon is, the less likely you’ll have to invest time yourself into maintenance. The bottom line is that it’s worthwhile for you to do your own research and make your own evaluation of whether or not to include outside technology and how much maintenance that particular technology will add to your stack.

Below are some of the more common examples of things you should probably not be reinventing in the modern age in your project (unless these ARE your projects).


Databases

Figure out which of the CAP guarantees you need for your project, then choose a database with the right properties. Database doesn’t just mean MySQL anymore; you can choose from:

  • “Traditional” Schema’ed SQL: Postgres / MySQL / MariaDB / MemSQL / Amazon RDS, etc.
  • Key Value Stores: Redis / Memcache / Riak
  • NoSQL: MongoDB/Cassandra
  • Hosted DBs: AWS RDS / DynamoDB / AppEngine Datastore
  • Heavy lifting: Amazon MR / Hadoop (Hive/Pig) / Cloudera / Google Big Query
  • Crazy stuff: Erlang’s Mnesia, iOS’s Core Data

Data Abstraction Layers

You should, in most circumstances, not be writing raw queries to whatever database you happen to choose to use. More likely than not, there exists a library to sit between the DB and your application code, separating the concerns of managing concurrent database sessions and details of the schema from your main code. At the very least, you should never have raw queries or SQL inline in the middle of your application code. Rather, wrap it in a function and centralize all the functions in a file called something really obvious (e.g., “queries.py”). A line like users = load_users(), for example, is infinitely easier to read than users = db.query(“SELECT username, foo, bar FROM users LIMIT 10 ORDER BY ID”). This type of centralization also makes it much easier to have a consistent style in your queries, and limits the number of places to go to change the queries should the schema change.
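A minimal Ruby analogue of that idea might look like this (the db handle, the query method, and the table layout are all hypothetical, for illustration only):

```ruby
# queries.rb - hedged sketch: all raw SQL lives here, behind named functions.
LOAD_USERS_SQL = 'SELECT username, foo, bar FROM users ORDER BY id LIMIT 10'.freeze

# The rest of the application only ever sees this function, never the SQL.
def load_users(db)
  db.query(LOAD_USERS_SQL)
end

# Elsewhere in the application code:
#   users = load_users(db)   # reads far better than an inline SQL string
```

If the schema changes, this file is the single place to update.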

Other Common Libraries and Tools to Consider Leveraging

  • Queuing or Pub/Sub Services. Take your pick of AMQP providers, ZeroMQ, RabbitMQ, Amazon SQS
  • Storage. Amazon S3, Google Cloud Storage
  • Monitoring: Graphite/Hosted Graphite, AWS Cloud Watch, New Relic
  • Log Collection / Aggregation. Loggly, Splunk

Auto Scaling

  • Auto Scaling. Heroku, AWS Beanstalk, AppEngine, AWS Opsworks, Digital Ocean

Commandment #5: Don’t Cross the Streams!

There are many good models for programming design: pub/sub, actors, MVC, etc. Choose whichever you like best, and stick to it. Different kinds of logic dealing with different kinds of data should be physically isolated in the codebase (again, this is the separation of concerns concept, reducing cognitive load on the future reader). The code which updates your UI should be physically distinct from the code that calculates what goes into the UI, for example.

Commandment #6: When Possible, Let the Computer Do the Work

If the compiler can catch logical errors in your code and prevent either bad behavior, bugs, or outright crashes, we absolutely should take advantage of that. Of course, some languages have compilers that make this easier than others. Haskell, for example, has a famously strict compiler that results in programmers spending most of their effort just getting code to compile. Once it compiles, though, “it just works.” For those of you who have never written in a strongly typed functional language, this may seem ridiculous or impossible, but don’t take my word for it. Seriously, click on some of these links; it’s absolutely possible to live in a world without runtime errors. And it really is that magical.

Admittedly, not every language has a compiler or a syntax that lends itself to much (or in some cases any!) compile-time checking. For those that don’t, take a few minutes to research what optional strictness checks you can enable in your project and evaluate if they make sense for you. A short, non-comprehensive list of some common ones I’ve used lately for languages with lenient runtimes include:


This is by no means an exhaustive or the perfect list of commandments for producing “good” (i.e., easily maintainable) code. That said, if every codebase I ever had to pick up in the future followed even half of the concepts in this list, I will have many fewer gray hairs and might even be able to add an extra 5 years on the end of my life. And I’ll certainly find work more enjoyable and less stressful.

This article is from Toptal

The Most Common Mobile Apps Mistakes

The mobile app market is saturated with competition. Trends turn over quickly, but no niche can last very long without several competitors jumping onto the bandwagon. These conditions result in a high failure rate across the board for the mobile app market. Only 20% of downloaded apps see users return after the first use, whereas 3% of apps remain in use after a month.

If any part of an app is undesirable, or slow to get the hang of, users are more likely to install a new one, rather than stick it out with the imperfect product. Nothing is wasted for the consumer when disposing of an app - except for the efforts of the designers and developers, that is. So, why is it that so many apps fail? Is this a predictable phenomenon that app designers and developers should accept? For clients, is this success rate acceptable? What does it take to bring your designs into the top 3% of prosperous apps?

The common mistakes span from failing to maintain consistency throughout the lifespan of an app, to attracting users in the first place. How can apps be designed with intuitive simplicity, without becoming repetitive and boring? How can an app offer pleasing details, without losing sight of a greater purpose? Most apps live and die in the first few days, so here are the top ten most common mistakes that designers can avoid.

Only 3% of mobile apps are in use after being downloaded.


Common Mistake #1: A Poor First Impression

Often the first use, or first day with an app, is the most critical period to hook a potential user. The first impression is so critical that it could be an umbrella point for the rest of this top ten. If anything goes wrong, or appears confusing or boring, potential users quickly lose interest. The proper balance for first impressions is tricky to strike, though. In some cases, a lengthy onboarding, or a drawn-out process to discover necessary features, can bore users. Yet an instantly stimulating app may disregard the need for a proper tutorial and promote confusion. Find the balance between an app that is immediately intuitive, but that also introduces users to its most exciting, engaging features quickly. Keep in mind that when users come to your app, they’re seeing it for the first time. Go through a proper beta testing process to learn how others perceive your app from the beginning. What seems obvious to the design team may not be obvious to newcomers.

Improper Onboarding

Onboarding is the step by step process of introducing a user to your app. Although it can be a good way to get someone quickly oriented, onboarding can also be a drawn out process that stands in the way of your users and their content. Often these tutorials are too long, and are likely swiped through blindly.

Sometimes, users have seen your app used in public or elsewhere, such that they get the point and just want to jump in. So, allow for a sort of quick exit strategy to avoid entirely blocking out the app upon its first use. To ensure that the onboarding process is in fact effective, consider which values this can communicate and how. The onboarding process should demonstrate the value of the app in order to hook a user, rather than just an explanation.

Go easy on the intro animation

Some designers address the issue of a good first impression with gripping intro animations to dazzle new users. But keep in mind that every time someone wants to run the app, they’re going to have to sit through the same thing over and over. If the app serves a daily function, this will tire your users quickly. Ten seconds of someone’s day for a logo to swipe across the screen and maybe spin a couple of times doesn’t really seem worth it after a while.

Common Mistake #2: Designing an App Without Purpose

Avoid entering the design process without succinct intentions. Apps are often designed and developed in order to follow trends, rather than to solve a problem, fill a niche, or offer a distinct service. What is the ambition for the app? For the designer and their team, the sense of purpose will affect every step of a project. This sensibility will guide each decision from the branding or marketing of an app, to the wireframe format, and button aesthetic. If the purpose is clear, each piece of the app will communicate and function as a coherent whole. Therefore, have the design and development team continually consider their decisions within a greater goal. As the project progresses, the initial ambition may change. This is okay, as long as the vision remains coherent.

Conveying this vision to your potential users means that they will understand what value the app brings to their life. Thus, this vision is an important thing to communicate in a first impression. The question becomes: how quickly can you convince users of your vision for the app, and of how it will improve their life or provide some sort of enjoyment or comfort? If this ambition is conveyed quickly, then, as long as your app is in fact useful, it will make it into the 3%.

Often joining a pre-existing market, or app niche, means that there are apps to study while designing your own. Thus, be careful how you choose to ‘re-purpose’ what is already out there. Study the existing app market, rather than skimming over it. Then, improve upon existing products with intent, rather than thoughtlessly imitating.

Common Mistake #3: Missing Out On UX Design Mapping

Be careful not to skip over thoughtful planning of an app’s UX architecture before jumping into design work. Even before the wireframing stage, the flow and structure of an app should be mapped out. Designers are often too eager to produce aesthetics and details. This results in a culture of designers who generally underappreciate UX and the necessary logic and navigation within an app. Slow down. Sketch out the flow of the app first, before worrying too much about the finer brush strokes. Apps often fail from an overarching lack of flow and organization, rather than from imperfect details. However, once the design process takes off, always keep the big picture in mind. The details and aesthetic should then clearly evoke the greater concept.

Common Mistake #4: Disregarding App Development Budget

As soon as the basis of the app is sketched, this is a good time to get a budget from the development team. This way you don’t reach the end of the project and suddenly need to start cutting critical features. As your design career develops, always take note of the average costs of constructing your concepts so that your design thinking responds to economic restraints. Budgets should be useful design constraints to work within.

Many failed apps try to cram too many features in from launch.


Common Mistake #5: Cramming in Design Features

Hopefully, rigorous wireframing will make the distinction between necessary and excessive functions clear. The platform is already the ultimate Swiss army knife, so your app doesn’t need to be. Not only will cramming an app with features lead to a likely disorienting user experience, but an overloaded app will also be difficult to market. If the use of the app is difficult to explain in a concise way, it’s likely trying to do too much. Paring down features is always hard, but it’s necessary. Often, the best strategy is to gain trust in the beginning with one or a few features; later in the life of the app, new ones can be ‘tested’. This way, the additional features are less likely to interfere with the crucial first few days of an app’s life.

Common Mistake #6: Dismissing App Context

Although most design offices practically operate within a vacuum, app designers must be aware of wider contexts. Purpose and ambition are important, but they become irrelevant if not directed within the proper context. Remember that although you and your design team may know your app very well, and find its interface obvious, this may not be the case for first-time users or different demographics.

Consider the immediate context or situation in which the app is intended to be used. Given the social situation, how long might the person expect to be on the app for? What else might be helpful for them to stumble upon given the circumstance? For example, UBER’s interface excels at being used very quickly. This means that for the most part, there isn’t much room for other content. This is perfect because when a user is out with friends and needing to book a ride, your conversation is hardly interrupted in the process. UBER hides a lot of support content deep within the app, but it only appears once the scenario calls for it.

Who is the target audience for the app? How might the type of user affect the design of the app? Perhaps an app targeted at a younger user can take more liberties in assuming a certain level of intuition from the user, whereas many functions may need to be pointed out for a less tech-savvy user. Is your app meant to be accessed quickly and for a short period of time? Or is this an app with lots of content that lets users stay a while? How will the design convey this form of use?

A good app design should consider the context in which it is used.


Common Mistake #7: Underestimating Crossing Platforms

Often apps are developed quickly as a response to changing markets or advancing competitors. This often results in web content being dragged onto the mobile platform. A constant issue, which you’d think would be widely understood by now, is that apps and other mobile content often make poor transitions between the desktop and mobile platforms. Mobile design can no longer get away with scaling down web content in the hope of getting a business quickly into the mobile market. The web-to-mobile transition doesn’t just mean scaling everything down, but also being able to work with less. Functions, navigation, and content must all be conveyed with a more minimal strategy.

Another common issue appears when an app development team aspires to release a product simultaneously on all platforms and through different app stores. This often results in poor compatibility, or a generally buggy, unpolished app. The gymnastics of balancing multiple platforms may be too much to add onto the launch of an app. It doesn’t hurt to take it slowly with one OS at a time, and iron out the major issues, before worrying about compatibility between platforms.

Common Mistake #8: Overcomplicating App Design

The famous architect Mies van der Rohe once said, “It’s better to be good than to be unique”. Ensure that your design meets the brief before you start breaking the box or adding flourishes. When designers find themselves adding things just to make a composition more appealing or exciting, those choices will likely lack much value. Throughout the design process, keep asking: how much can I remove? Instead of designing additively, design reductively. What isn't needed? This method is directed as much at content, concept, and function as it is at aesthetics.

Overcomplexity is often the result of a design unnecessarily breaking conventions. Many symbols and interfaces are standard within our visual and tactile language. Will your product really benefit from reworking these standards? Standard icons have proven themselves to be universally intuitive, so they are often the quickest way to provide visual cues without cluttering a screen. Don't let your design flourishes get in the way of the actual content or function of the app. Apps are often not given enough white space. The need for white space is a graphic concept that has transcended both digital and print, so it shouldn't be underrated. Give elements on the screen room to breathe, so that all the work you put into navigation and UX can be felt.

The app design process can be reductive, rather than additive.

Common Mistake #9: Design Inconsistencies

To the point on simplicity: if a design is going to introduce new standards, they at least have to be consistent across the app. Each new function or piece of content doesn't have to be an opportunity to introduce a new design concept. Are texts uniformly formatted? Do UI elements behave in predictable, yet pleasing ways throughout the app? Design consistency must strike a balance between fitting within a common visual language and avoiding aesthetic stagnation. The line between intuitive consistency and boredom is a fine one.

Common Mistake #10: Underutilizing App Beta Testing

All designers should analyze the use of their apps with some sort of feedback loop in order to learn what is and isn't working. A common mistake is for a team to do its beta testing in-house. You need to bring in fresh eyes to really dig into the drafts of the app. Send out an ad for beta testers and work with a select audience before going public. This can be a great way to iron out details, edit down features, and find what's missing. Although beta testing can be time-consuming, it is a better alternative to shipping an app that flops. Anticipate that proper testing often takes around eight weeks. Avoid using friends or colleagues as testers, as they may not criticize the app with the honesty you need. Having app blogs or websites review your app is another way to test it in a public setting without a full launch. If you're having a hard time paring down features, this is a good opportunity to see which elements matter and which don't.

The app design market is a battleground, so designing products that are merely adequate isn't enough. Find a way to hook users from the beginning: communicate and demonstrate the critical values and features as soon as you can. To do this, your design team must have a coherent vision of what the app hopes to achieve. To establish this ambition, a rigorous storyboarding process can iron out what is and isn't imperative. Consider which types of users your app best fits. Then refine and refine until absolutely nothing else can be taken away from the project without it falling apart.

This article was written by KENT MUNDLE, Toptal Technical Editor

Rise Of Automated Trading: Machines Trading S&P 500

Nowadays, more than 60 percent of trading activity in various assets (such as stocks, index futures, and commodities) is no longer carried out by human traders, but by automated trading: specialized programs, based on particular algorithms, that automatically buy and sell assets across different markets, aiming to achieve a positive return in the long run.

In this article, I'm going to show you how to predict, with good accuracy, how the next trade should be placed to get a positive gain. For this example, I selected as the underlying asset to trade the S&P 500 index, the weighted average of the 500 US companies with the largest capitalization. A very simple strategy to implement is to buy the S&P 500 index when the Wall Street Exchange starts trading at 9:30 AM, and to sell it at the closing session at 4:00 PM Eastern Time. If the closing price of the index is higher than the opening price, there is a positive gain; if the closing price is lower, there is a negative one. So the question is: how do we know whether a trading session will end with a closing price higher than the opening price? Machine learning is a powerful tool for such a complex task, and it can support us with the trading decision.
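
To make the gain calculation of that strategy concrete, here is a tiny Python sketch. The open/close price pairs below are made up for illustration; the rest of the article downloads real historical quotes.

```python
# hypothetical (open, close) price pairs for three trading sessions
sessions = [(2091.49, 2091.58), (2084.10, 2077.50), (2070.00, 2081.20)]

# gain of the buy-at-open, sell-at-close strategy for each session
gains = [close - open_ for open_, close in sessions]

total_gain = sum(gains)                   # overall result of the strategy
winning = sum(1 for g in gains if g > 0)  # sessions that closed higher
```

A model that could predict, before the open, which sessions will end with a positive gain would let us trade only on the winning days.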

Machine learning is the new frontier of many useful real-life applications, and financial trading is one sector where it is used very often. An important concept in machine learning is that we do not need to write code for every possible rule, such as for pattern recognition. This is because every machine learning model learns from the data itself, and can later be used to predict unseen new data.

Machine Learning is the new frontier of many useful real life applications

Disclaimer: The purpose of this article is to show how to train machine learning methods; in the provided code examples, not every function is explained. This article is not intended to let one copy-paste all the code and run the same provided tests, as some details that were out of the scope of the article are missing. Also, basic knowledge of Python is required. The main intention of the article is to show an example of how machine learning may be effective in predicting buys and sells in the financial sector. However, trading with real money requires many other skills, such as money management and risk management. This article is just a small piece of the “big picture”.

Building Your First Financial Data Automated Trading Program

So, you want to create your first program to analyze financial data and predict the right trade? Let me show you how. I will be using Python for Machine Learning code, and we will be using historical data from Yahoo Finance service. As mentioned before, historical data is necessary to train the model before making our predictions.

To begin, we need to install:

Note that only a part of GraphLab is open source (the SFrame), so to use the entire library we need a license. There is a 30-day free license and a non-commercial license for students or those participating in Kaggle competitions. From my point of view, GraphLab Create is a very intuitive and easy-to-use library for analyzing data and training machine learning models.

Digging in the Python Code

Let’s dig in with some Python code to see how to download financial data from the Internet. I suggest using IPython notebook to test the following code, because IPython has many advantages compared to a traditional IDE, especially when we need to combine source code, execution output, table data, and charts together in the same document. For a brief explanation of how to use IPython notebook, please look at the Introduction to IPython Notebook article.

So, let’s create a new IPython notebook and write some code to download historical prices of S&P 500 index. Note, if you prefer to use other tools, you can start with a new Python project in your preferred IDE.

from __future__ import division
import graphlab as gl
from datetime import datetime
from yahoo_finance import Share

# download historical prices of S&P 500 index
today = datetime.strftime(datetime.today(), "%Y-%m-%d")
stock = Share('^GSPC') # ^GSPC is the Yahoo Finance symbol for the S&P 500 index
# we gather historical quotes from 2001-01-01 up to today
hist_quotes = stock.get_historical('2001-01-01', today)
# here is what a row looks like:
# {'Adj_Close': '2091.580078',
#  'Close': '2091.580078',
#  'Date': '2016-04-22',
#  'High': '2094.320068',
#  'Low': '2081.199951',
#  'Open': '2091.48999',
#  'Symbol': '%5eGSPC',
#  'Volume': '3790580000'}

Here, hist_quotes is a list of dictionaries, and each dictionary object is a trading day with Open, High, Low, Close, Adj_Close, Volume, Symbol, and Date values. During each trading day, the price usually changes from the opening price Open to the closing price Close, hitting a maximum and a minimum value, High and Low. We need to read through it and create lists of each of the most relevant data. Also, the data is ordered with the most recent values first, so we need to reverse it:

l_date = []
l_open = []
l_high = []
l_low = []
l_close = []
l_volume = []
# iterate in reverse so that the oldest quotes come first
for quotes in reversed(hist_quotes):
    l_date.append(quotes['Date'])
    l_open.append(float(quotes['Open']))
    l_high.append(float(quotes['High']))
    l_low.append(float(quotes['Low']))
    l_close.append(float(quotes['Close']))
    l_volume.append(int(quotes['Volume']))
We can pack all downloaded quotes into an SFrame object, which is a highly scalable, column-based data frame, and it is compressed. One of the advantages is that it can also be larger than the amount of RAM, because it is disk-backed. You can check the documentation to learn more about SFrame.

So, let’s store and then check the historical data:

qq = gl.SFrame({'datetime' : l_date,
                'open'     : l_open,
                'high'     : l_high,
                'low'      : l_low,
                'close'    : l_close,
                'volume'   : l_volume})
# datetime is a string, so convert it into a datetime object
qq['datetime'] = qq['datetime'].apply(lambda x: datetime.strptime(x, '%Y-%m-%d'))
# just to check that data is sorted in ascending order
qq.head(3)

close datetime high low open volume
1283.27 2001-01-02 00:00:00 1320.28 1276.05 1320.28 1129400000
1347.56 2001-01-03 00:00:00 1347.76 1274.62 1283.27 1880700000
1333.34 2001-01-04 00:00:00 1350.24 1329.14 1347.56 2131000000

Now we can save the data to disk with the SFrame method save, as follows:

qq.save("SP500_daily.bin")
# once data is saved, we can use the following instruction to retrieve it
qq = gl.SFrame("SP500_daily.bin/")

Let’s See What the S&P 500 Looks Like

To see what the loaded S&P 500 data looks like, we can use the following code:

import matplotlib.pyplot as plt
# the following magic is only for those who are using IPython notebook
%matplotlib inline

# plot the daily closing price over time
plt.plot(qq['datetime'], qq['close'])

The output of the code is the following graph:

Read the full article by Andrea Nalon, a Toptal freelance developer here. 

Tackle The Most Complex Code First by Writing Tests That Matter

There are a lot of discussions, articles, and blogs around the topic of code quality. People say – use Test Driven techniques! Tests are a “must have” to start any refactoring! That’s all cool, but it’s 2016 and there is a massive volume of products and code bases still in production that were created ten, fifteen, or even twenty years ago. It’s no secret that a lot of them have legacy code with low test coverage.

While I’d like to be always at the leading, or even bleeding, edge of the technology world – engaged with new cool projects and technologies – unfortunately it’s not always possible, and often I have to deal with old systems. I like to say that when you develop from scratch, you act as a creator, mastering new matter. But when you’re working on legacy code, you’re more like a surgeon – you know how the system works in general, but you never know for sure whether the patient will survive your “operation”. And since it’s legacy code, there are not many up-to-date tests for you to rely on. This means that very frequently one of the first steps is to cover it with tests. More precisely, not merely to provide coverage, but to develop a test coverage strategy.

Coupling and Cyclomatic Complexity: Metrics for Smarter Test Coverage

Forget 100% coverage. Test smarter by identifying classes that are more likely to break.

Basically, what I needed to determine was what parts (classes / packages) of the system we needed to cover with tests in the first place, where we needed unit tests, where integration tests would be more helpful etc. There are admittedly many ways to approach this type of analysis and the one that I’ve used may not be the best, but it’s kind of an automatic approach. Once my approach is implemented, it takes minimal time to actually do the analysis itself and, what is more important, it brings some fun into legacy code analysis.

The main idea here is to analyse two metrics – coupling (i.e., afferent coupling, or CA) and complexity (i.e. cyclomatic complexity).

The first one measures how many classes use our class, so it basically tells us how close a particular class is to the heart of the system; the more classes there are that use our class, the more important it is to cover it with tests.

On the other hand, if a class is very simple (e.g. contains only constants), then even if it’s used by many other parts of the system, it’s not nearly as important to create a test for. Here is where the second metric can help. If a class contains a lot of logic, the Cyclomatic complexity will be high.

The same logic can also be applied in reverse; i.e., even if a class is not used by many classes and represents just one particular use case, it still makes sense to cover it with tests if its internal logic is complex.

There is one caveat though: let’s say we have two classes – one with CA 100 and complexity 2, and the other with CA 60 and complexity 20. Even though the sum of the metrics is higher for the first one, we should definitely cover the second one first. The first class is used by a lot of other classes but is not very complex, whereas the second one is also widely used and contains considerably more logic.

To summarize: we need to identify classes with high CA and Cyclomatic complexity. In mathematical terms, a fitness function is needed that can be used as a rating – f(CA,Complexity) – whose values increase along with CA and Complexity.

Generally speaking, classes where both metrics are high, and the difference between them is small, should be given the highest priority for test coverage.

Finding tools to calculate CA and Complexity for the whole code base, and provide a simple way to extract this information in CSV format, proved to be a challenge. During my search, I came across two tools that are free so it would be unfair not to mention them:

A Bit Of Math

The main problem here is that we have two criteria – CA and Cyclomatic complexity – so we need to combine them and convert into one scalar value. If we had a slightly different task – e.g., to find a class with the worst combination of our criteria – we would have a classical multi-objective optimization problem:

We would need to find a point on the so-called Pareto front (red in the picture above). What is interesting about the Pareto set is that every point in it is a solution to the optimization task. Whenever we move along the red line, we need to make a compromise between our criteria – if one gets better, the other one gets worse. This is called scalarization, and the final result depends on how we do it.

There are a lot of techniques that we can use here, each with its own pros and cons. However, the most popular ones are linear scalarizing and the one based on a reference point. Linear is the easiest one. Our fitness function will look like a linear combination of CA and Complexity:

f(CA, Complexity) = A×CA + B×Complexity

where A and B are some coefficients.

The point which represents a solution to our optimization problem will lie on the line (blue in the picture below). More precisely, it will be at the intersection of the blue line and red Pareto front. Our original problem is not exactly an optimization problem. Rather, we need to create a ranking function. Let’s consider two values of our ranking function, basically two values in our Rank column:

R1 = A×CA1 + B×Complexity1 and R2 = A×CA2 + B×Complexity2

Both of the formulas written above are equations of lines, moreover these lines are parallel. Taking more rank values into consideration we’ll get more lines and therefore more points where the Pareto line intersects with the (dotted) blue lines. These points will be classes corresponding to a particular rank value.

Unfortunately, there is an issue with this approach. For any line (rank value), we’ll have points with very small CA and very big Complexity (and vice versa) lying on it. This immediately puts points with a big difference between metric values at the top of the list, which is exactly what we wanted to avoid.
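
A two-line Python check makes this flaw visible; the metric values here are invented purely for illustration. With equal weights, a linear rank cannot tell a balanced class apart from an extreme one:

```python
def linear_rank(ca, complexity, a=1.0, b=1.0):
    # linear scalarization: f(CA, Complexity) = A*CA + B*Complexity
    return a * ca + b * complexity

# a moderately coupled AND moderately complex class...
balanced = linear_rank(50, 50)
# ...scores exactly the same as a heavily coupled but trivial one
extreme = linear_rank(98, 2)
```

Both score 100, so the extreme class, which we wanted to push down the list, lands at the same rank as the balanced one.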

The other way to do the scalarizing is based on a reference point. The reference point is a point with the maximum values of both criteria:

(max(CA), max(Complexity))

The fitness function will be the distance between the Reference point and the data points:

f(CA, Complexity) = √((CA − max(CA))² + (Complexity − max(Complexity))²)

We can think about this fitness function as a circle with the center at the reference point. The radius in this case is the value of the Rank. The solution to the optimization problem will be the point where the circle touches the Pareto front. The solution to the original problem will be sets of points corresponding to the different circle radii as shown in the following picture (parts of circles for different ranks are shown as dotted blue curves):

This approach deals better with extreme values, but there are still two issues. First, I’d like to have more points near the reference point, to better overcome the problem we faced with the linear combination. Second, CA and cyclomatic complexity are inherently different and have different value ranges, so we need to normalize them (e.g., so that all the values of both metrics would be from 1 to 100).

Here is a small trick that we can apply to solve the first issue – instead of looking at the CA and Cyclomatic Complexity, we can look at their inverted values. The reference point in this case will be (0,0). To solve the second issue, we can just normalize metrics using minimum value. Here is how it looks:

Inverted and normalized complexity – NormComplexity:

(1 + min(Complexity)) / (1 + Complexity)∗100

Inverted and normalized CA – NormCA:

(1 + min(CA)) / (1+CA)∗100

Note: I added 1 to make sure that there is no division by 0.

The following picture shows a plot with the inverted values:

Final Ranking

We are now coming to the last step – calculating the rank. As mentioned, I’m using the reference point method, so the only thing that we need to do is to calculate the length of the vector, normalize it, and make it ascend with the importance of a unit test creation for a class. Here is the final formula:

Rank(NormComplexity, NormCA) = 100 − √(NormComplexity² + NormCA²) / √2
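
As a sketch, the whole pipeline fits in a few lines of Python. The class names and metric values are invented for illustration; the first two are the classes from the caveat discussed earlier (CA 100 / complexity 2 and CA 60 / complexity 20):

```python
import math

# hypothetical (class name, CA, cyclomatic complexity) triples
classes = [
    ("WidelyUsedButSimple", 100, 2),
    ("UsedAndComplex",       60, 20),
    ("Constants",             5, 1),
]

min_ca = min(ca for _, ca, _ in classes)
min_cx = min(cx for _, _, cx in classes)

def rank(ca, cx):
    # inverted and normalized metrics: high raw values map close to 0
    norm_ca = (1 + min_ca) / (1 + ca) * 100
    norm_cx = (1 + min_cx) / (1 + cx) * 100
    # distance from the (0, 0) reference point, rescaled into 0..100
    return 100 - math.sqrt(norm_cx ** 2 + norm_ca ** 2) / math.sqrt(2)

for name, ca, cx in sorted(classes, key=lambda c: -rank(c[1], c[2])):
    print("%-20s rank = %5.1f" % (name, rank(ca, cx)))
```

With these numbers, the widely used and complex class ranks highest, matching the caveat, while the simple constants class drops to the bottom with a rank near zero.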

More Statistics

There is one more thought that I’d like to add, but let’s first have a look at some statistics. Here is a histogram of the Coupling metrics:

What is interesting about this picture is the number of classes with low CA (0-2). Classes with CA 0 are either not used at all or are top level services. These represent API endpoints, so it’s fine that we have a lot of them. But classes with CA 1 are the ones that are directly used by the endpoints and we have more of these classes than endpoints. What does this mean from architecture / design perspective?

In general, it means that we have a kind of script oriented approach – we script every business case separately (we can’t really reuse the code as business cases are too diverse). If that is the case, then it’s definitely a code smell and we need to do refactoring. Otherwise, it means the cohesion of our system is low, in which case we also need refactoring, but architectural refactoring this time.

Additional useful information we can get from the histogram above is that we can completely filter out classes with low coupling (CA in {0,1}) from the list of the classes eligible for coverage with unit tests. The same classes, though, are good candidates for the integration / functional tests.

You can find all the scripts and resources that I have used in this GitHub repository: ashalitkin/code-base-stats.

Does It Always Work?

Not necessarily. First of all it’s all about static analysis, not runtime. If a class is linked from many other classes it can be a sign that it’s heavily used, but it’s not always true. For example, we don’t know whether the functionality is really heavily used by end users. Second, if the design and the quality of the system is good enough, then most likely different parts / layers of it are decoupled via interfaces so static analysis of the CA will not give us a true picture. I guess it’s one of the main reasons why CA is not that popular in tools like Sonar. Fortunately, it’s totally fine for us since, if you remember, we are interested in applying this specifically to old ugly code bases.

In general, I’d say that runtime analysis would give much better results, but unfortunately it’s much more costly, time consuming, and complex, so our approach is a potentially useful and lower cost alternative.

This article was written by Andrey Shalitkin, a Toptal Java developer.

Scaling Scala: How to Dockerize Using Kubernetes

Kubernetes is the new kid on the block, promising to help deploy applications into the cloud and scale them more quickly. Today, when developing for a microservices architecture, it’s pretty standard to choose Scala for creating API servers.

Microservices are replacing classic monolithic back-end servers with multiple independent services that communicate among themselves and have their own processes and resources.

If there is a Scala application in your plans and you want to scale it into a cloud, then you are at the right place. In this article I am going to show step-by-step how to take a generic Scala application and implement Kubernetes with Docker to launch multiple instances of the application. The final result will be a single application deployed as multiple instances, and load balanced by Kubernetes.

All of this will be implemented by simply importing the Kubernetes source kit in your Scala application. Please note, the kit hides a lot of complicated details related to installation and configuration, but it is small enough to be readable and easy to understand if you want to analyze what it does. For simplicity, we will deploy everything on your local machine. However, the same configuration is suitable for a real-world cloud deployment of Kubernetes.

Scale Your Scala Application with Kubernetes
Be smart and sleep tight: scale your Docker with Kubernetes.

What is Kubernetes

Before going into the gory details of the implementation, let’s discuss what Kubernetes is and why it’s important.

You may have already heard of Docker. In a sense, it is a lightweight virtual machine.

Docker gives the advantage of deploying each server in an isolated environment, very similar to a stand-alone virtual machine, without the complexity of managing a full-fledged virtual machine.

For these reasons, it is already one of the more widely used tools for deploying applications in the cloud. A Docker image is easy and fast to build and to duplicate, much more so than a traditional virtual machine like VMware, VirtualBox, or Xen.

Kubernetes complements Docker, offering a complete environment for managing dockerized applications. By using Kubernetes, you can easily deploy, configure, orchestrate, manage, and monitor hundreds or even thousands of Docker applications.

Kubernetes is an open source tool developed by Google that has been adopted by many other vendors. It is available natively on the Google Cloud Platform, but other vendors have adopted it for their cloud services too. It can be found on Amazon AWS, Microsoft Azure, Red Hat OpenShift, and even more cloud technologies. We can say it is well positioned to become a standard for deploying cloud applications.


Now that we covered the basics, let’s check if you have all the prerequisite software installed. First of all, you need Docker. If you are using either Windows or Mac, you need the Docker Toolbox. If you are using Linux, you need to install the particular package provided by your distribution or simply follow the official directions.

We are going to code in Scala, which is a JVM language. You need, of course, the Java Development Kit and the Scala build tool SBT installed and available in the global path. If you are already a Scala programmer, chances are you have those tools installed already.

If you are using Windows or Mac, Docker will by default create a virtual machine named default with only 1GB of memory, which can be too small for running Kubernetes. In my experience, I had issues with the default settings. I recommend that you open the VirtualBox GUI, select your virtual machine default, and change the memory to at least 2048MB.

VirtualBox memory settings

The Application to Clusterize

The instructions in this tutorial can apply to any Scala application or project. To give this article some “meat” to work on, I chose an example used very often to demonstrate a simple REST microservice in Scala: Akka HTTP. I recommend you try to apply the source kit to the suggested example before attempting to use it on your application. I have tested the kit against the demo application, but I cannot guarantee that there will be no conflicts with your code.

So first, we start by cloning the demo application:

git clone https://github.com/theiterators/akka-http-microservice

Next, test if everything works correctly:

cd akka-http-microservice
sbt run

Then, access http://localhost:9000/ip/, and you should see something like the following image:

Akka HTTP microservice is running

Adding the Source Kit

Now, we can add the source kit with some Git magic:

git remote add ScalaGoodies https://github.com/sciabarra/ScalaGoodies
git fetch --all
git merge ScalaGoodies/kubernetes

With that, you have the demo including the source kit, and you are ready to try. Or you can even copy and paste the code from there into your application.

Once you have merged or copied the files in your projects, you are ready to start.

Starting Kubernetes

Once you have downloaded the kit, we need to download the necessary kubectl binary, by running:


This installer is smart enough (hopefully) to download the correct kubectl binary for OSX, Linux, or Windows, depending on your system. Note, the installer worked on the systems I own. Please do report any issues, so that I can fix the kit.

Once you have installed the kubectl binary, you can start the whole Kubernetes stack in your local Docker. Just run:


The first time it is run, this command will download the images of the whole Kubernetes stack, and a local registry needed to store your images. It can take some time, so please be patient. Also note, it needs direct access to the internet. If you are behind a proxy, it will be a problem, as the kit does not support proxies. To solve it, you have to configure tools like Docker, curl, and so on to use the proxy. That is complicated enough that I recommend getting temporary unrestricted access.

Assuming you were able to download everything successfully, to check if Kubernetes is running fine, you can type the following command:

bin/kubectl get nodes

The expected answer is:

NAME        STATUS    AGE
127.0.0.1   Ready     2m

Note that age may vary, of course. Also, since starting Kubernetes can take some time, you may have to invoke the command a couple of times before you see the answer. If you do not get errors here, congratulations, you have Kubernetes up and running on your local machine.

Dockerizing Your Scala App

Now that you have Kubernetes up and running, you can deploy your application in it. In the old days, before Docker, you had to deploy an entire server for running your application. With Kubernetes, all you need to do to deploy your application is:

  • Create a Docker image.
  • Push it in a registry from where it can be launched.
  • Launch the instance with Kubernetes, which will take the image from the registry.

Luckily, it is way less complicated than it looks, especially if you are using the SBT build tool like many do.

In the kit, I included two files containing all the necessary definitions to create an image able to run Scala applications, or at least what is needed to run the Akka HTTP demo. I cannot guarantee that it will work with any other Scala application, but it is a good starting point, and it should work for many different configurations. The files to look at for building the Docker image are:


Let’s have a look at what’s in them. The file project/docker.sbt contains the command to import the sbt-docker plugin:

addSbtPlugin("se.marcuslonnberg" % "sbt-docker" % "1.4.0")

This plugin manages the building of the Docker image with SBT for you. The Docker definition is in the docker.sbt file and looks like this:

imageNames in docker := Seq(ImageName("localhost:5000/akkahttp:latest"))
dockerfile in docker := {
  val jarFile: File = sbt.Keys.`package`.in(Compile, packageBin).value
  val classpath = (managedClasspath in Compile).value
  val mainclass = mainClass.in(Compile, packageBin).value.getOrElse(sys.error("Expected exactly one main class"))
  val jarTarget = s"/app/${jarFile.getName}"
  val classpathString = classpath.files.map("/app/" + _.getName)
    .mkString(":") + ":" + jarTarget
  new Dockerfile {
    from("anapsix/alpine-java:8") // base image able to run Java programs
    add(classpath.files, "/app/")
    add(jarFile, jarTarget)
    entryPoint("java", "-cp", classpathString, mainclass)
  }
}
To fully understand the meaning of this file, you need to know Docker well enough to understand this definition file. However, we are not going into the details of the Docker definition file, because you do not need to understand it thoroughly to build the image.

The beauty of using SBT for building the Docker image is that the SBT will take care of collecting all the files for you.

Note the classpath is automatically generated by the following command:

val classpath = (managedClasspath in Compile).value

In general, it is pretty complicated to gather all the JAR files to run an application. Using SBT, the Docker file will be generated with add(classpath.files, "/app/"). This way, SBT collects all the JAR files for you and constructs a Dockerfile to run your application.

The other commands gather the missing pieces to create a Docker image. The image will be built using an existing image suitable for running Java programs (anapsix/alpine-java:8, available on Docker Hub). The other instructions add the remaining files needed to run your application. Finally, by specifying an entry point, we can run it. Note also that the name starts with localhost:5000 on purpose, because localhost:5000 is where I installed the registry in the start-kube-local.sh script.

Building the Docker Image with SBT

To build the Docker image, you can ignore all the details of the Dockerfile. You just need to type:

sbt dockerBuildAndPush

The sbt-docker plugin will then build a Docker image for you, downloading all the necessary pieces from the internet, and then it will push it to the Docker registry that was started earlier, together with the Kubernetes application, on localhost. So, all you need to do is wait a little bit to have your image cooked and ready.

Note, if you experience problems, the best thing to do is to reset everything to a known state by running the following commands:


Those commands should stop all the containers and restart them correctly to get your registry ready to receive the image built and pushed by sbt.

Starting the Service in Kubernetes

Now that the application is packaged in a container and pushed to a registry, we are ready to use it. Kubernetes uses command lines and configuration files to manage the cluster. Since command lines can become very long, and to be able to replicate the steps, I am using the configuration files here. All the samples in the source kit are in the folder kube.

Our next step is to launch a single instance of the image. A running image is called, in the Kubernetes language, a pod. So let’s create a pod by invoking the following command:

bin/kubectl create -f kube/akkahttp-pod.yml
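For reference, a pod definition along the lines of kube/akkahttp-pod.yml could look roughly like the sketch below; the image tag and the label key are assumptions on my part, so check the actual file in the source kit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: akkahttp
  labels:
    app: akkahttp          # label used later by the service selector
spec:
  containers:
    - name: akkahttp
      image: localhost:5000/akkahttp:latest   # pulled from the local registry
      ports:
        - containerPort: 9000
```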

You can now inspect the situation with the command:

bin/kubectl get pods

You should see:

NAME                   READY     STATUS    RESTARTS   AGE
akkahttp               1/1       Running   0          33s
k8s-etcd-     1/1       Running   0          7d
k8s-master-   4/4       Running   0          7d
k8s-proxy-    1/1       Running   0          7d

The status can actually differ; for example, it may be “ContainerCreating” for a few seconds before it becomes “Running”. You can also get a status like “Error” if, for example, you forgot to create the image beforehand.

You can also check if your pod is running with the command:

bin/kubectl logs akkahttp

You should see an output ending with something like this:

[DEBUG] [05/30/2016 12:19:53.133] [default-akka.actor.default-dispatcher-5] [akka://default/system/IO-TCP/selectors/$a/0] Successfully bound to /0:0:0:0:0:0:0:0:9000

Now you have the service up and running inside the container. However, the service is not yet reachable. This behavior is part of the design of Kubernetes. Your pod is running, but you have to expose it explicitly. Otherwise, the service is meant to be internal.

Creating a Service

Creating a service and checking the result is a matter of executing:

bin/kubectl create -f kube/akkahttp-service.yaml
bin/kubectl get svc

You should see something like this:

akkahttp-service                  9000/TCP   44s
kubernetes     <none>        443/TCP    3m

Note that the port can be different. Kubernetes allocated a port for the service and started it. If you are using Linux, you can directly open the browser and navigate to the service address to see the result. If you are using Windows or Mac with Docker Toolbox, the IP is local to the virtual machine that is running Docker, and unfortunately it is still unreachable.

I want to stress here that this is not a problem of Kubernetes, rather it is a limitation of the Docker Toolbox, which in turn depends on the constraints imposed by virtual machines like VirtualBox, which act like a computer within another computer. To overcome this limitation, we need to create a tunnel. To make things easier, I included another script which opens a tunnel on an arbitrary port to reach any service we deployed. You can type the following command:

bin/forward-kube-local.sh akkahttp-service 9000

Note that the tunnel will not run in the background; you have to keep the terminal window open as long as you need it, and close it when you no longer need the tunnel. While the tunnel is running, you can open http://localhost:9000/ip/ and finally see the application running in Kubernetes.

Final Touch: Scale

So far we have “simply” put our application in Kubernetes. While it is an exciting achievement, it does not add too much value to our deployment. We’re saved from the effort of uploading and installing on a server and configuring a proxy server for it.

Where Kubernetes shines is in scaling. You can deploy two, ten, or one hundred instances of our application by only changing the number of replicas in the configuration file. So let’s do it.

We are going to stop the single pod and start a deployment instead. So let’s execute the following commands:

bin/kubectl delete -f kube/akkahttp-pod.yml
bin/kubectl create -f kube/akkahttp-deploy.yaml

Next, check the status. Again, you may try a couple of times because the deployment can take some time to be performed:

NAME                                   READY     STATUS    RESTARTS   AGE
akkahttp-deployment-4229989632-mjp6u   1/1       Running   0          16s
akkahttp-deployment-4229989632-s822x   1/1       Running   0          16s
k8s-etcd-                     1/1       Running   0          6d
k8s-master-                   4/4       Running   0          6d
k8s-proxy-                    1/1       Running   0          6d

Now we have two pods, not one. This is because the configuration file I provided contains the value replicas: 2, and the two different names were generated by the system. I am not going into the details of the configuration files, because the scope of this article is simply an introduction for Scala programmers to jump-start into Kubernetes.

Anyhow, there are now two pods active. What is interesting is that the service is the same as before. We configured the service to load balance between all the pods labeled akkahttp. This means we do not have to redeploy the service, but we can replace the single instance with a replicated one.
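To illustrate that label-based load balancing, a service definition along the lines of kube/akkahttp-service.yaml might look like this sketch (the selector key and the NodePort type are assumptions based on the behavior described above, not a copy of the real file):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: akkahttp-service
spec:
  type: NodePort           # Kubernetes allocates a node port, as seen above
  ports:
    - port: 9000
  selector:
    app: akkahttp          # balances across every pod carrying this label
```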

We can verify this by launching the proxy again (if you are on Windows and you have closed it):

bin/forward-kube-local.sh akkahttp-service 9000

Then, we can try to open two terminal windows and see the logs for each pod. For example, in the first type:

bin/kubectl logs -f akkahttp-deployment-4229989632-mjp6u

And in the second type:

bin/kubectl logs -f akkahttp-deployment-4229989632-s822x

Read the full article on Toptal.

Clustering Algorithms: From Start To State Of The Art

It’s not a bad time to be a Data Scientist. Serious people may find interest in you if you turn the conversation towards “Big Data”, and the rest of the party crowd will be intrigued when you mention “Artificial Intelligence” and “Machine Learning”. Even Google thinks you’re not bad, and that you’re getting even better. There are a lot of ‘smart’ algorithms that help data scientists do their wizardry. It may all seem complicated, but if we understand and organize algorithms a bit, it’s not even that hard to find and apply the one that we need.

Courses on data mining or machine learning will usually start with clustering, because it is both simple and useful. It is an important part of a somewhat wider area of Unsupervised Learning, where the data we want to describe is not labeled. In most cases, the user has not given us much information about the expected output. The algorithm only has the data, and it should do the best it can. In our case, it should perform clustering: separating data into groups (clusters) that contain similar data points, while the dissimilarity between groups is as high as possible. Data points can represent anything, such as our clients. Clustering can be useful if we, for example, want to group similar users and then run different marketing campaigns on each cluster.

K-Means Clustering

After the necessary introduction, Data Mining courses always continue with K-Means: an effective, widely used, all-around clustering algorithm. Before actually running it, we have to define a distance function between data points (for example, Euclidean distance if we want to cluster points in space), and we have to set the number of clusters we want (k).

The algorithm begins by selecting k points as starting centroids (‘centers’ of clusters). We can just select any k random points, or we can use some other approach, but picking random points is a good start. Then, we iteratively repeat two steps:

  1. Assignment step: each of m points from our dataset is assigned to a cluster that is represented by the closest of the k centroids. For each point, we calculate distances to each centroid, and simply pick the least distant one.

  2. Update step: for each cluster, a new centroid is calculated as the mean of all points in the cluster. From the previous step, we have a set of points which are assigned to a cluster. Now, for each such set, we calculate a mean that we declare a new centroid of the cluster.

After each iteration, the centroids are slowly moving, and the total distance from each point to its assigned centroid gets lower and lower. The two steps are alternated until convergence, meaning until there are no more changes in cluster assignment. After a number of iterations, the same set of points will be assigned to each centroid, therefore leading to the same centroids again. K-Means is guaranteed to converge to a local optimum. However, that does not necessarily have to be the best overall solution (global optimum).
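The two alternating steps can be sketched in a few lines of Python. This is a toy illustration of the algorithm as described above (it assumes no cluster ever ends up empty), not a production implementation:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # start from k distinct random data points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assignment step: each point goes to its closest centroid
        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # update step: each centroid becomes the mean of its assigned points
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):  # converged: nothing moved
            break
        centroids = new_centroids
    return centroids, labels
```

For well-separated groups of points, a few iterations are enough for the centroids to settle into the groups' means.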

The final clustering result can depend on the selection of initial centroids, so a lot of thought has been given to this problem. One simple solution is just to run K-Means a couple of times with random initial assignments. We can then select the best result by taking the one with the minimal sum of distances from each point to its cluster – the error value that we are trying to minimize in the first place.

Other approaches to selecting initial points can rely on selecting distant points. This can lead to better results, but we may have a problem with outliers: those rare isolated points that are just “off” may simply be errors in the data. Since they are far from any meaningful cluster, each such point may end up being its own ‘cluster’. A good balance is the K-Means++ variant [Arthur and Vassilvitskii, 2007], whose initialization will still pick random points, but with probability proportional to the squared distance from the previously assigned centroids. Points that are further away will have a higher probability of being selected as starting centroids. Consequently, if there’s a group of points, the probability that a point from the group will be selected also gets higher as their probabilities add up, resolving the outlier problem we mentioned.
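The K-Means++ seeding rule can be sketched as follows, assuming Euclidean distance; each next centroid is sampled with probability proportional to the squared distance from the nearest centroid chosen so far:

```python
import numpy as np

def kmeanspp_init(X, k, seed=0):
    rng = np.random.default_rng(seed)
    # first centroid: a uniformly random data point
    centroids = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # squared distance from each point to its nearest chosen centroid
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        # sample the next centroid with probability proportional to d2
        centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centroids)
```

Because distant points dominate the probabilities, the seeds tend to land in different groups rather than piling up in one.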

K-Means++ is also the default initialization for Python’s Scikit-learn K-Means implementation. If you’re using Python, this may be your library of choice. For Java, Weka library may be a good start:

Java (Weka)

// Load some data
Instances data = DataSource.read("data.arff");
// Create the model
SimpleKMeans kMeans = new SimpleKMeans();
// We want three clusters
kMeans.setNumClusters(3);
// Run K-Means
kMeans.buildClusterer(data);
// Print the centroids
Instances centroids = kMeans.getClusterCentroids();
for (Instance centroid : centroids) {
    System.out.println(centroid);
}
// Print cluster membership for each instance
for (Instance point : data) {
    System.out.println(point + " is in cluster " + kMeans.clusterInstance(point));
}

Python (Scikit-learn)

>>> from sklearn import cluster, datasets
>>> iris = datasets.load_iris()
>>> X_iris = iris.data
>>> y_iris = iris.target
>>> k_means = cluster.KMeans(n_clusters=3)
>>> k_means.fit(X_iris)
KMeans(copy_x=True, init='k-means++', ...
>>> print(k_means.labels_[::10])
[1 1 1 1 1 0 0 0 0 0 2 2 2 2 2]
>>> print(y_iris[::10])
[0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]

In the Python example above, we used the standard example dataset ‘Iris’, which contains flower petal and sepal dimensions for three different species of iris. We clustered these into three clusters, and compared the obtained clusters to the actual species (target), to see that they match perfectly (up to a relabeling, since the cluster numbers themselves are arbitrary).

In this case, we knew that there were three different clusters (species), and K-Means recognized correctly which ones go together. But, how do we choose a good number of clusters k in general? These kinds of questions are quite common in Machine Learning. If we request more clusters, they will be smaller, and therefore the total error (the total of distances from points to their assigned clusters) will be smaller. So, is it a good idea just to set a bigger k? We may end up with k = m, that is, each point being its own centroid, with each cluster having only one point. Yes, the total error is 0, but we didn’t get a simpler description of our data, nor is it general enough to cover some new points that may appear. This is called overfitting, and we don’t want that.

A way to deal with this problem is to include some penalty for a larger number of clusters. So, we are now trying to minimize not only the error, but error + penalty. The error will converge towards zero as we increase the number of clusters, but the penalty will grow. At some point, the gain from adding another cluster will be less than the introduced penalty, and we’ll have the optimal result. A solution that uses the Bayesian Information Criterion (BIC) for this purpose is called X-Means [Pelleg and Moore, 2000].
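The error-plus-penalty idea can be demonstrated with a small sketch. Here the penalty is a simple λ·k term with an arbitrarily chosen λ, just to show the trade-off; it is not the actual BIC that X-Means uses:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# toy dataset with four well-separated groups
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=0)

def penalized_error(k, lam=50.0):  # lam: hand-picked penalty weight
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    return km.inertia_ + lam * k   # error + penalty

# pick the k that minimizes error + penalty instead of error alone
best_k = min(range(1, 11), key=penalized_error)
print(best_k)
```

Without the penalty term, the minimum would always be at the largest k tried; with it, the score bottoms out once extra clusters stop paying for themselves.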

Another thing we have to define properly is the distance function. Sometimes that’s a straightforward task, a logical one given the nature of the data. For points in space, Euclidean distance is an obvious solution, but it may be tricky for features of different ‘units’, for discrete variables, etc. This may require a lot of domain knowledge. Or, we can call Machine Learning for help. We can actually try to learn the distance function. If we have a training set of points whose correct grouping we know (i.e. points labeled with their clusters), we can use supervised learning techniques to find a good function, and then apply it to our target set that is not yet clustered.

The method used in K-Means, with its two alternating steps resembles an Expectation–Maximization (EM) method. Actually, it can be considered a very simple version of EM. However, it should not be confused with the more elaborate EM clustering algorithm even though it shares some of the same principles.

EM Clustering

So, with K-Means clustering each point is assigned to just a single cluster, and a cluster is described only by its centroid. This is not too flexible, as we may have problems with clusters that are overlapping, or ones that are not of circular shape. With EM Clustering, we can now go a step further and describe each cluster by its centroid (mean), covariance (so that we can have elliptical clusters), and weight (the size of the cluster). The probability that a point belongs to a cluster is now given by a multivariate Gaussian probability distribution (multivariate - depending on multiple variables). That also means that we can calculate the probability of a point being under a Gaussian ‘bell’, i.e. the probability of a point belonging to a cluster.

We now start the EM procedure by calculating, for each point, the probabilities of it belonging to each of the current clusters (which, again, may be randomly created at the beginning). This is the E-step. If one cluster is a really good candidate for a point, it will have a probability close to one. However, two or more clusters can be acceptable candidates, so the point has a distribution of probabilities over clusters. This property of the algorithm, where points are not restricted to belonging to one cluster, is called “soft clustering”.

The M-step now recalculates the parameters of each cluster, using the assignments of points to the previous set of clusters. To calculate the new mean, covariance, and weight of a cluster, each data point is weighted by its probability of belonging to the cluster, as calculated in the previous step.

Alternating these two steps will increase the total log-likelihood until it converges. Again, the maximum may be local, so we can run the algorithm several times to get better clusters.

If we now want to determine a single cluster for each point, we may simply choose the most probable one. Having a probability model, we can also use it to generate similar data, that is, to sample more points that are similar to the data we observed.
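In scikit-learn, EM clustering with Gaussian components is available as GaussianMixture. This short sketch shows the soft assignments, the hard choice of the most probable cluster, and sampling new points from the fitted model (the dataset here is a synthetic stand-in):

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)
gmm = GaussianMixture(n_components=3, random_state=42).fit(X)

probs = gmm.predict_proba(X)   # soft clustering: a probability per cluster
hard = probs.argmax(axis=1)    # most probable cluster for each point
samples, _ = gmm.sample(5)     # generate 5 new points similar to the data
```

Each row of probs sums to one, which is exactly the distribution over clusters described above.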

Affinity Propagation

Affinity Propagation (AP) was published by Frey and Dueck in 2007, and is only getting more and more popular due to its simplicity, general applicability, and performance. It is changing its status from state of the art to de facto standard.

The main drawbacks of K-Means and similar algorithms are having to select the number of clusters, and choosing the initial set of points. Affinity Propagation, instead, takes as input measures of similarity between pairs of data points, and simultaneously considers all data points as potential exemplars. Real-valued messages are exchanged between data points until a high-quality set of exemplars and corresponding clusters gradually emerges.

As an input, the algorithm requires us to provide two sets of data:

  1. Similarities between data points, representing how well-suited a point is to be another one’s exemplar. If there’s no similarity between two points, as in they cannot belong to the same cluster, this similarity can be omitted or set to -Infinity depending on implementation.

  2. Preferences, representing each data point’s suitability to be an exemplar. We may have some a priori information which points could be favored for this role, and so we can represent it through preferences.

Both similarities and preferences are often represented through a single matrix, where the values on the main diagonal represent preferences. Matrix representation is good for dense datasets. Where connections between points are sparse, it is more practical not to store the whole n x n matrix in memory, but instead to keep a list of similarities to connected points. Behind the scenes, ‘exchanging messages between points’ is the same thing as manipulating matrices, and it’s only a matter of perspective and implementation.

The algorithm then runs through a number of iterations, until it converges. Each iteration has two message-passing steps:

  1. Calculating responsibilities: Responsibility r(i, k) reflects the accumulated evidence for how well-suited point k is to serve as the exemplar for point i, taking into account other potential exemplars for point i. Responsibility is sent from data point i to candidate exemplar point k.

  2. Calculating availabilities: Availability a(i, k) reflects the accumulated evidence for how appropriate it would be for point i to choose point k as its exemplar, taking into account the support from other points that point k should be an exemplar. Availability is sent from candidate exemplar point k to point i.

In order to calculate responsibilities, the algorithm uses the original similarities and the availabilities calculated in the previous iteration (initially, all availabilities are set to zero). Responsibilities are set to the input similarity between point i and point k as its exemplar, minus the largest similarity-plus-availability sum between point i and the other candidate exemplars. The logic behind calculating how suitable a point is for an exemplar is that it is favored more if the initial a priori preference was higher, but the responsibility gets lower when there is a similar point that considers itself a good candidate, so there is a ‘competition’ between the two until one is decided in some iteration.

Calculating availabilities, then, uses calculated responsibilities as evidence whether each candidate would make a good exemplar. Availability a(i, k) is set to the self-responsibility r(k, k) plus the sum of the positive responsibilities that candidate exemplar k receives from other points.

Finally, we can have different stopping criteria to terminate the procedure, such as when changes in values fall below some threshold, or the maximum number of iterations is reached. At any point through Affinity Propagation procedure, summing Responsibility (r) and Availability (a) matrices gives us the clustering information we need: for point i, the k with maximum r(i, k) + a(i, k) represents point i’s exemplar. Or, if we just need the set of exemplars, we can scan the main diagonal. If r(i, i) + a(i, i) > 0, point i is an exemplar.

We’ve seen that with K-Means and similar algorithms, deciding the number of clusters can be tricky. With AP, we don’t have to specify it explicitly, but it may still need some tuning if we obtain either more or fewer clusters than we find optimal. Luckily, just by adjusting the preferences we can lower or raise the number of clusters. Setting preferences to a higher value will lead to more clusters, as each point is ‘more certain’ of its suitability to be an exemplar and is therefore harder to ‘beat’ and bring under some other point’s ‘domination’. Conversely, setting lower preferences will result in fewer clusters; as if the points are saying, “no, no, please, you’re a better exemplar, I’ll join your cluster”. As a general rule, we may set all preferences to the median similarity for a medium to large number of clusters, or to the lowest similarity for a moderate number of clusters. However, a couple of runs with adjusted preferences may be needed to get a result that exactly suits our needs.
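scikit-learn's AffinityPropagation accepts this preference directly. The sketch below shows how the number of clusters reacts as we raise it; the concrete preference values are arbitrary and only meant to illustrate the trend on a toy dataset:

```python
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=60, centers=3, random_state=0)

# lower (more negative) preference -> fewer clusters, higher -> more
for preference in (-500, -50, -5):
    ap = AffinityPropagation(preference=preference, random_state=0).fit(X)
    print(preference, len(ap.cluster_centers_indices_))
```

If preference is left unset, scikit-learn defaults it to the median of the input similarities, matching the rule of thumb above.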

Hierarchical Affinity Propagation is also worth mentioning, as a variant of the algorithm that deals with quadratic complexity by splitting the dataset into a couple of subsets, clustering them separately, and then performing the second level of clustering.

In The End…

There’s a whole range of clustering algorithms, each one with its pros and cons regarding what type of data they work with, time complexity, weaknesses, and so on. To mention more algorithms, there’s, for example, Hierarchical Agglomerative Clustering (or Linkage Clustering), good for when we don’t necessarily have circular (or hyper-spherical) clusters, and don’t know the number of clusters in advance. It starts with each point being a separate cluster, and works by joining the two closest clusters in each step until everything is in one big cluster.

With Hierarchical Agglomerative Clustering, we can easily decide the number of clusters afterwards by cutting the dendrogram (tree diagram) horizontally where we find suitable. It is also repeatable (always gives the same answer for the same dataset), but is also of a higher complexity (quadratic).

Then, DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is also an algorithm worth mentioning. It groups points that are closely packed together, expanding clusters in any direction where there are nearby points, thus dealing with different shapes of clusters.
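A minimal sketch with scikit-learn's DBSCAN on the classic two-moons dataset, a shape K-Means cannot separate; eps and min_samples are hand-tuned for this toy data:

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)
db = DBSCAN(eps=0.3, min_samples=5).fit(X)

# label -1 marks noise points; the rest are cluster ids
n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
print(n_clusters)
```

Because clusters grow by chaining nearby points, each crescent ends up as one cluster regardless of its shape.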

These algorithms deserve an article of their own, and we can explore them in a separate post later on.

It takes experience with some trial and error to know when to use one algorithm or the other. Luckily, we have a range of implementations in different programming languages, so trying them out only requires a little willingness to play.

 This article was written by Lovro Iliassich, a Toptal Java developer. 

Declarative Programming: Is It A Real Thing?

Declarative programming is, currently, the dominant paradigm of an extensive and diverse set of domains such as databases, templating and configuration management.

In a nutshell, declarative programming consists of instructing a program on what needs to be done, instead of telling it how to do it. In practice, this approach entails providing a domain-specific language (DSL) for expressing what the user wants, and shielding them from the low-level constructs (loops, conditionals, assignments) that materialize the desired end state.

While this paradigm is a remarkable improvement over the imperative approach that it replaced, I contend that declarative programming has significant limitations, limitations that I explore in this article. Moreover, I propose a dual approach that captures the benefits of declarative programming while superseding its limitations.

Read the full article on Toptal

Clean Code and The Art of Exception Handling

Exceptions are as old as programming itself. Back in the days when programming was done in hardware, or via low-level programming languages, exceptions were used to alter the flow of the program, and to avoid hardware failures. Today, Wikipedia defines exceptions as:

anomalous or exceptional conditions requiring special processing – often changing the normal flow of program execution…

And that handling them requires:

specialized programming language constructs or computer hardware mechanisms.

So, exceptions require special treatment, and an unhandled exception may cause unexpected behavior. The results are often spectacular. In 1996, the famous Ariane 5 rocket launch failure was attributed to an unhandled overflow exception. History’s Worst Software Bugs contains some other bugs that could be attributed to unhandled or mishandled exceptions.

Over time, these errors, and countless others (that were, perhaps, not as dramatic, but still catastrophic for those involved) contributed to the impression that exceptions are bad.

The results of improperly handling exceptions have led us to believe that exceptions are always bad.

But exceptions are a fundamental element of modern programming; they exist to make our software better. Rather than fearing exceptions, we should embrace them and learn how to benefit from them. In this article, we will discuss how to manage exceptions elegantly, and use them to write clean code that is more maintainable.

Exception Handling: It’s a Good Thing

With the rise of object-oriented programming (OOP), exception support has become a crucial element of modern programming languages. A robust exception handling system is built into most languages, nowadays. For example, Ruby provides for the following typical pattern:

begin
  # do something that may raise SpecificError
rescue SpecificError => e
  retry if some_condition_met?
end

There is nothing wrong with the previous code. But overusing these patterns will cause code smells, and won’t necessarily be beneficial. Likewise, misusing them can actually do a lot of harm to your code base, making it brittle, or obfuscating the cause of errors.

The stigma surrounding exceptions often makes programmers feel at a loss. It’s a fact of life that exceptions can’t be avoided, but we are often taught they must be dealt with swiftly and decisively. As we will see, this is not necessarily true. Rather, we should learn the art of handling exceptions gracefully, making them harmonious with the rest of our code.

Following are some recommended practices that will help you embrace exceptions and make use of them and their abilities to keep your code maintainable, extensible, and readable:

  • maintainability: Allows us to easily find and fix new bugs, without the fear of breaking current functionality, introducing further bugs, or having to abandon the code altogether due to increased complexity over time.
  • extensibility: Allows us to easily add to our code base, implementing new or changed requirements without breaking existing functionality. Extensibility provides flexibility, and enables a high level of reusability for our code base.
  • readability: Allows us to easily read the code and discover its purpose without spending too much time digging. This is critical for efficiently discovering bugs and untested code.

These elements are the main factors of what we might call cleanliness or quality, which is not a direct measure itself, but instead is the combined effect of the previous points, as demonstrated in this comic:

"WTFs/m" by Thom Holwerda, OSNews

With that said, let’s dive into these practices and see how each of them affects those three measures.

Note: We will present examples from Ruby, but all of the constructs demonstrated here have equivalents in the most common OOP languages.

Always create your own ApplicationError hierarchy

Most languages come with a variety of exception classes, organized in an inheritance hierarchy, like any other OOP class. To preserve the readability, maintainability, and extensibility of our code, it’s a good idea to create our own subtree of application-specific exceptions that extend the base exception class. Investing some time in logically structuring this hierarchy can be extremely beneficial. For example:

class ApplicationError < StandardError; end
# Validation Errors
class ValidationError < ApplicationError; end
class RequiredFieldError < ValidationError; end
class UniqueFieldError < ValidationError; end
# HTTP 4XX Response Errors
class ResponseError < ApplicationError; end
class BadRequestError < ResponseError; end
class UnauthorizedError < ResponseError; end
# ...

Example of an application exception hierarchy.

Having an extensible, comprehensive exceptions package for our application makes handling these application-specific situations much easier. For example, we can decide which exceptions to handle in a more natural way. This not only boosts the readability of our code, but also increases the maintainability of our applications and libraries (gems).

From the readability perspective, it’s much easier to read:

rescue ValidationError => e

Than to read:

rescue RequiredFieldError, UniqueFieldError, ... => e

From the maintainability perspective, say, for example, we are implementing a JSON API, and we have defined our own ClientError with several subtypes, to be used when a client sends a bad request. If any one of these is raised, the application should render the JSON representation of the error in its response. It will be easier to fix, or add logic, to a single block that handles ClientErrors rather than looping over each possible client error and implementing the same handler code for each. In terms of extensibility, if we later have to implement another type of client error, we can trust it will already be handled properly here.

Moreover, this does not prevent us from implementing additional special handling for specific client errors earlier in the call stack, or altering the same exception object along the way:

# app/controller/pseudo_controller.rb
def authenticate_user!
  fail AuthenticationError if token_invalid? || token_expired?
  User.find_by(authentication_token: token)
rescue AuthenticationError => e
  report_suspicious_activity if token_invalid?
  raise e
end

def show
  # ... action logic that may raise a ClientError subtype ...
rescue ClientError => e
  # render the JSON representation of the error (shared by all ClientErrors)
end
As you can see, raising this specific exception didn’t prevent us from being able to handle it on different levels, altering it, re-raising it, and allowing the parent class handler to resolve it.

Two things to note here:

  • Not all languages support raising exceptions from within an exception handler.
  • In most languages, raising a new exception from within a handler will cause the original exception to be lost forever, so it’s better to re-raise the same exception object (as in the above example) to avoid losing track of the original cause of the error. (Unless you are doing this intentionally).

Never rescue Exception

That is, never try to implement a catch-all handler for the base exception type. Rescuing or catching all exceptions wholesale is never a good idea in any language, whether it’s globally on a base application level, or in a small buried method used only once. We don’t want to rescue Exception because it will obfuscate whatever really happened, damaging both maintainability and extensibility. We can waste a huge amount of time debugging what the actual problem is, when it could be as simple as a syntax error:

# main.rb
def bad_example
  i_might_raise_exception!
rescue Exception
  nil
end

# elsewhere.rb
def i_might_raise_exception!
  retrun do_a_lot_of_work!
end

You might have noticed the error in the previous example; return is mistyped. Although modern editors provide some protection against this specific type of syntax error, this example illustrates how rescue Exception does harm to our code. At no point is the actual type of the exception (in this case a NoMethodError) addressed, nor is it ever exposed to the developer, which may cause us to waste a lot of time running in circles.

Never rescue more exceptions than you need to

The previous point is a specific case of this rule: we should always be careful not to over-generalize our exception handlers. The reasons are the same; whenever we rescue more exceptions than we should, we end up hiding parts of the application logic from higher levels of the application, not to mention suppressing the developer’s ability to handle the exception themselves. This severely affects the extensibility and maintainability of the code.

If we do attempt to handle different exception subtypes in the same handler, we introduce fat code blocks that have too many responsibilities. For example, if we are building a library that consumes a remote API, handling a MethodNotAllowedError (HTTP 405), is usually different from handling an UnauthorizedError (HTTP 401), even though they are both ResponseErrors.

As we will see, often there exists a different part of the application that would be better suited to handle specific exceptions in a more DRY way.

So, define the single responsibility of your class or method, and handle the bare minimum of exceptions that satisfy this responsibility requirement. For example, if a method is responsible for getting stock info from a remote API, then it should handle only the exceptions that arise from getting that info, and leave the handling of other errors to a different method designed specifically for those responsibilities:

def get_info
  response = HTTP.get(STOCKS_URL + "#{@symbol}/info")
  fail AuthenticationError if response.code == 401
  fail StockNotFoundError, @symbol if response.code == 404
  JSON.parse(response.body)
rescue JSON::ParserError
  retry # endpoint-specific: ask again until we receive valid JSON
end

Here we defined the contract for this method: it only gets us the info about the stock. It handles endpoint-specific errors, such as an incomplete or malformed JSON response. It doesn’t handle the case when authentication fails or expires, or when the stock doesn’t exist. Those are someone else’s responsibility, and they are explicitly passed up the call stack, where there is a better place to handle them in a DRY way.

Resist the urge to handle exceptions immediately

This is the complement to the last point. An exception can be handled at any point in the call stack, and any point in the class hierarchy, so knowing exactly where to handle it can be mystifying. To solve this conundrum, many developers opt to handle any exception as soon as it arises, but investing time in thinking this through will usually result in finding a more appropriate place to handle specific exceptions.

One common pattern that we see in Rails applications (especially those that expose JSON-only APIs) is the following controller method:

# app/controllers/client_controller.rb
def create
  @client = Client.new(params[:client])
  if @client.save
    render json: @client
  else
    render json: @client.errors
  end
end

(Note that although this is not technically an exception handler, it serves the same functional purpose: @client.save returns false in exactly the cases where save! would raise an exception.)

In this case, however, repeating the same error handler in every controller action is the opposite of DRY, and damages maintainability and extensibility. Instead, we can make use of the special nature of exception propagation, and handle them only once, in the parent controller class, ApplicationController:

# app/controllers/client_controller.rb
def create
  @client = Client.create!(params[:client])
  render json: @client
end

# app/controllers/application_controller.rb
rescue_from ActiveRecord::RecordInvalid, with: :render_unprocessable_entity

def render_unprocessable_entity(e)
  render json: { errors: e.record.errors }, status: 422
end

This way, we can ensure that all of the ActiveRecord::RecordInvalid errors are properly and DRY-ly handled in one place, on the base ApplicationController level. This gives us the freedom to fiddle with them if we want to handle specific cases at the lower level, or simply let them propagate gracefully.

Not all exceptions need handling

When developing a gem or a library, many developers will try to encapsulate the functionality and not allow any exception to propagate out of the library. But sometimes, it’s not obvious how to handle an exception until the specific application is implemented.

Let’s take ActiveRecord as an example of the ideal solution. The library provides developers with two approaches for completeness. The save method handles exceptions without propagating them, simply returning false, while save! raises an exception when it fails. This gives developers the option of handling specific error cases differently, or simply handling any failure in a general way.
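The pattern behind this dual API can be sketched in plain Ruby (the Record and RecordInvalid classes here are our own stand-ins, not ActiveRecord’s): the bang variant raises on failure, and the plain variant is built on top of it, swallowing the exception and returning false instead.

```ruby
# Plain-Ruby sketch of ActiveRecord's save/save! convention; the class
# names are invented stand-ins, not the real ActiveRecord implementation.
class RecordInvalid < StandardError; end

class Record
  def initialize(valid)
    @valid = valid
  end

  def save!
    raise RecordInvalid, "validation failed" unless @valid
    true
  end

  def save
    save!
  rescue RecordInvalid
    false # swallow the exception; callers check the return value instead
  end
end

Record.new(false).save # => false
Record.new(true).save! # => true
```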

But what if you don’t have the time or resources to provide such a complete implementation? In that case, if there is any uncertainty, it is best to expose the exception, and release it into the wild.

Sometimes the best way to handle an exception is to let it fly free.

Here’s why: We are almost always working with changing requirements, and deciding that an exception will always be handled in one specific way might actually harm our implementation, damaging extensibility and maintainability, and potentially adding huge technical debt, especially when developing libraries.

Take the earlier example of a stock API consumer fetching stock prices. We chose to handle the incomplete and malformed response on the spot, and we chose to retry the same request again until we got a valid response. But later, the requirements might change, such that we must fall back to saved historical stock data, instead of retrying the request.

At this point, we will be forced to change the library itself, updating how this exception is handled, because the dependent projects won’t handle this exception. (How could they? It was never exposed to them before.) We will also have to inform the owners of projects that rely on our library. This might become a nightmare if there are many such projects, since they are likely to have been built on the assumption that this error will be handled in a specific way.

Now, we can see where we are heading with dependency management. The outlook is not good. This situation happens quite often, and more often than not, it degrades the library’s usefulness, extensibility, and flexibility.

So here is the bottom line: if it is unclear how an exception should be handled, let it propagate gracefully. There are many cases where a clear place exists to handle the exception internally, but there are many other cases where exposing the exception is better. So before you opt into handling the exception, just give it a second thought. A good rule of thumb is to only insist on handling exceptions when you are interacting directly with the end-user.

Follow the convention

The implementation of Ruby, and, even more so, Rails, follows some naming conventions, such as distinguishing between method_names and method_names! with a “bang.” In Ruby, the bang indicates that the method will alter the object that invoked it, and in Rails, it means that the method will raise an exception if it fails to execute the expected behavior. Try to respect the same convention, especially if you are going to open-source your library.

If we write a new method! with a bang in a Rails application, we must take these conventions into account. Nothing forces us to raise an exception when this method fails, but by deviating from the convention, the method may mislead programmers into believing they will be given the chance to handle exceptions themselves, when, in fact, they will not.

Another Ruby convention, attributed to Jim Weirich, is to use fail to indicate method failure, and only to use raise if you are re-raising the exception.

“An aside, because I use exceptions to indicate failures, I almost always use the “fail” keyword rather than the “raise” keyword in Ruby. Fail and raise are synonyms so there is no difference except that “fail” more clearly communicates that the method has failed. The only time I use “raise” is when I am catching an exception and re-raising it, because here I’m not failing, but explicitly and purposefully raising an exception. This is a stylistic issue I follow, but I doubt many other people do.”
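In code, the convention reads like this (the method names and KeyError choice are invented for the example): fail marks this method’s own failure, while raise re-raises someone else’s exception after, say, logging it.

```ruby
# Hypothetical example of the fail/raise convention; names are illustrative.
def fetch_setting(config, key)
  fail KeyError, "missing setting: #{key}" unless config.key?(key) # our failure
  config[key]
end

def fetch_setting_with_logging(config, key)
  fetch_setting(config, key)
rescue KeyError => e
  warn "lookup failed: #{e.message}" # log, then pass the error along unchanged
  raise                              # re-raising, not failing
end

fetch_setting_with_logging({ "port" => 3000 }, "port") # => 3000
```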

Many other language communities have adopted conventions like these around how exceptions are treated, and ignoring these conventions will damage the readability and maintainability of our code.


Log everything

This practice doesn’t solely apply to exceptions, of course, but if there’s one thing that should always be logged, it’s an exception.

Logging is extremely important (important enough for Ruby to ship a logger in its standard library). It’s the diary of our applications, and even more important than keeping a record of how our applications succeed is logging how and when they fail.

There is no shortage of logging libraries or log-based services and design patterns. It’s critical to keep track of our exceptions so we can review what happened and investigate if something doesn’t look right. Proper log messages can point developers directly to the cause of a problem, saving them immeasurable time.
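A minimal sketch with Ruby’s standard-library Logger (the failing operation and the StringIO log destination are stand-ins for real work and a real log target): record the exception’s class, message, and origin before deciding what else to do with it.

```ruby
require "logger"
require "stringio"

log_io = StringIO.new        # stand-in for a real log destination
logger = Logger.new(log_io)

def risky_operation
  raise IOError, "disk unavailable" # stand-in for real work that can fail
end

begin
  risky_operation
rescue StandardError => e
  # Record the class, message, and origin -- the diary entry for this failure.
  logger.error("#{e.class}: #{e.message}")
  logger.error(e.backtrace.first) if e.backtrace
end

puts log_io.string
```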

That Clean Code Confidence


Clean exception handling will send your code quality to the moon!

Exceptions are a fundamental part of every programming language. They are special and extremely powerful, and we must leverage their power to elevate the quality of our code instead of exhausting ourselves fighting with them.

In this article, we dove into some good practices for structuring our exception trees, and saw how structuring them logically benefits readability and quality. We looked at different approaches for handling exceptions, either in one place or on multiple levels.

We saw that it’s bad to “catch ‘em all”, and that it’s ok to let them float around and bubble up.

We looked at where to handle exceptions in a DRY manner, and learned that we are not obligated to handle them when or where they first arise.

We discussed when exactly it is a good idea to handle them, when it’s a bad idea, and why, when in doubt, it’s a good idea to let them propagate.

Finally, we discussed other points that can help maximize the usefulness of exceptions, such as following conventions and logging everything.

With these basic guidelines, we can feel much more comfortable and confident dealing with error cases in our code, and making our exceptions truly exceptional!

Special thanks to Avdi Grimm and his awesome talk Exceptional Ruby, which helped a lot in the making of this article.

This article was written by AHMED ABDELRAZZAK, a Toptal SQL developer.

Introduction To Concurrent Programming: A Beginner's Guide

What is concurrent programming? Simply described, it’s when you are doing more than one thing at the same time. Not to be confused with parallelism, concurrency is when multiple sequences of operations are run in overlapping periods of time. In the realm of programming, concurrency is a pretty complex subject. Dealing with constructs such as threads and locks and avoiding issues like race conditions and deadlocks can be quite cumbersome, making concurrent programs difficult to write. Through concurrency, programs can be designed as independent processes working together in a specific composition. Such a structure may or may not be made parallel; however, achieving such a structure in your program offers numerous advantages.

Introduction To Concurrent Programming

Read the full article on Toptal.
