Rise Of Automated Trading: Machines Trading S&P 500

Nowadays, more than 60 percent of trading activity across different assets (such as stocks, index futures, and commodities) is no longer carried out by human traders, but by automated trading: specialized programs based on particular algorithms that automatically buy and sell assets over different markets, aiming to achieve a positive return in the long run.

In this article, I’m going to show you how to predict, with good accuracy, how the next trade should be placed to get a positive gain. For this example, as the underlying asset to trade, I selected the S&P 500 index, the capitalization-weighted average of the 500 largest US companies. A very simple strategy to implement is to buy the S&P 500 index when the New York Stock Exchange starts trading, at 9:30 AM, and to sell it at the closing session at 4:00 PM Eastern Time. If the closing price of the index is higher than the opening price, there is a positive gain; if the closing price is lower than the opening price, there is a loss. So the question is: how do we know if the trading session will end with a closing price higher than the opening price? Machine Learning is a powerful tool for such a complex task, and it can be a useful tool to support us with the trading decision.
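To make the strategy concrete, here is a minimal sketch of how the outcome of a single open-to-close trade would be computed. The prices are made-up values, purely for illustration:

# A made-up trading session, purely for illustration
open_price = 2091.48   # S&P 500 at 9:30 AM ET
close_price = 2091.58  # S&P 500 at 4:00 PM ET

gain = close_price - open_price   # positive: the session closed higher
outcome = +1 if gain > 0 else -1  # +1 = winning trade, -1 = losing trade
print(gain, outcome)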

Machine Learning is the new frontier of many useful real life applications. Financial trading is one of these, and it’s used very often in this sector. An important concept about Machine Learning is that we do not need to write code for every kind of possible rules, such as pattern recognition. This is because every model associated with Machine Learning learns from the data itself, and then can be later used to predict unseen new data.


Disclaimer: The purpose of this article is to show how to train Machine Learning methods; not every function in the provided code examples is explained. This article is not intended to let you copy and paste all the code and run the same provided tests, as some details that were out of the scope of the article are missing. Also, basic knowledge of Python is required. The main intention of the article is to show an example of how Machine Learning may be effective at predicting buys and sells in the financial sector. However, trading with real money requires many other skills, such as money management and risk management. This article is just a small piece of the “big picture”.

Building Your First Financial Data Automated Trading Program

So, you want to create your first program to analyze financial data and predict the right trade? Let me show you how. I will be using Python for the Machine Learning code, and historical data from the Yahoo Finance service. As mentioned before, historical data is necessary to train the model before making our predictions.

To begin, we need to install:

  • GraphLab Create, the Machine Learning library we will use to analyze data and train models
  • The yahoo_finance package, used to download historical quotes

Note that only a part of GraphLab is open source, the SFrame, so to use the entire library we need a license. There is a 30-day free license and a non-commercial license for students or those participating in Kaggle competitions. From my point of view, GraphLab Create is a very intuitive and easy-to-use library for analyzing data and training Machine Learning models.

Digging in the Python Code

Let’s dig in with some Python code to see how to download financial data from the Internet. I suggest using an IPython notebook to test the following code, because IPython has many advantages compared to a traditional IDE, especially when we need to combine source code, execution output, table data and charts together in the same document. For a brief explanation of how to use IPython Notebook, please look at the Introduction to IPython Notebook article.

So, let’s create a new IPython notebook and write some code to download historical prices of S&P 500 index. Note, if you prefer to use other tools, you can start with a new Python project in your preferred IDE.

from __future__ import division  # must come before any other statement
import graphlab as gl
from datetime import datetime
from yahoo_finance import Share
# download historical prices of S&P 500 index
today = datetime.strftime(datetime.today(), "%Y-%m-%d")
stock = Share('^GSPC') # ^GSPC is the Yahoo finance symbol to refer S&P 500 index
# we gather historical quotes from 2001-01-01 up to today
hist_quotes = stock.get_historical('2001-01-01', today)
# here is how a row looks like
hist_quotes[0]
{'Adj_Close': '2091.580078',
'Close': '2091.580078',
'Date': '2016-04-22',
'High': '2094.320068',
'Low': '2081.199951',
'Open': '2091.48999',
'Symbol': '%5eGSPC',
'Volume': '3790580000'}

Here, hist_quotes is a list of dictionaries, and each dictionary object is a trading day with Open, High, Low, Close, Adj_Close, Volume, Symbol and Date values. During each trading day, the price usually changes from the opening price Open to the closing price Close, hitting a maximum and a minimum value, High and Low. We need to read through it and create lists of each of the most relevant data. Also, the data is ordered with the most recent values first, so we need to reverse it:

l_date = []
l_open = []
l_high = []
l_low = []
l_close = []
l_volume = []
# reverse the list so that it is sorted in ascending order by date
hist_quotes.reverse()
for quotes in hist_quotes:
    l_date.append(quotes['Date'])
    l_open.append(float(quotes['Open']))
    l_high.append(float(quotes['High']))
    l_low.append(float(quotes['Low']))
    l_close.append(float(quotes['Close']))
    l_volume.append(int(quotes['Volume']))

We can pack all the downloaded quotes into an SFrame object, a highly scalable, compressed, column-based data frame. One of the advantages is that it can be larger than the amount of available RAM because it is disk-backed. You can check the documentation to learn more about SFrame.

So, let’s store and then check the historical data:

qq = gl.SFrame({'datetime' : l_date,
                'open' : l_open,
                'high' : l_high,
                'low' : l_low,
                'close' : l_close,
                'volume' : l_volume})
# datetime is a string, so convert into datetime object
qq['datetime'] = qq['datetime'].apply(lambda x:datetime.strptime(x, '%Y-%m-%d'))
# just to check if data is sorted in ascending mode
qq.head(3)

close      datetime               high       low        open       volume
1283.27    2001-01-02 00:00:00    1320.28    1276.05    1320.28    1129400000
1347.56    2001-01-03 00:00:00    1347.76    1274.62    1283.27    1880700000
1333.34    2001-01-04 00:00:00    1350.24    1329.14    1347.56    2131000000

Now we can save data to disk with the SFrame method save, as follows:

qq.save("SP500_daily.bin")
# once data is saved, we can use the following instruction to retrieve it
qq = gl.SFrame("SP500_daily.bin/")

Let’s See What the S&P 500 Looks Like

To see what the loaded S&P 500 data looks like, we can use the following code:

import matplotlib.pyplot as plt
# the following magic is only for those who are using IPython Notebook
%matplotlib inline
plt.plot(qq['close'])

The output of the code is the following graph:

Read the full article by Andrea Nalon, a Toptal freelance developer, here.



Python Best Practices and Tips by Toptal Developers

This resource contains a collection of Python best practices and Python tips provided by our Toptal network members. As such, this page will be updated on a regular basis to include additional information and cover emerging Python techniques. This is a community driven project, so you are encouraged to contribute as well, and we are counting on your feedback.

Python is a high-level language used in many development areas, like web development (Django, Flask), data analysis (SciPy, scikit-learn), desktop UI (wxWidgets, PyQt) and system administration (Ansible, OpenStack). The main advantage of Python is development speed. Python comes with a rich standard library, a lot of third-party libraries and clean syntax. All this allows a developer to focus on the problem they want to solve, and not on the language details or reinventing the wheel.

Check out the Toptal resource pages for additional information on Python. There is a Python hiring guide, a Python job description, common Python mistakes, and Python interview questions.

Be Consistent About Indentation in the Same Python File

Indentation level in Python is really important, and mixing tabs and spaces is not a smart, nor recommended, practice. To go even further, Python 3 will simply refuse to interpret a mixed file, while in Python 2 tabs are interpreted as if converted to spaces using 8-space tab stops. So while executing, you may have no clue at which indentation level a specific line is being considered.

For any code you think someone else will someday read or use, to avoid confusion you should stick with PEP-8, or your team-specific coding style. PEP-8 strongly discourages mixing tabs and spaces in the same file.
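As a quick demonstration of the Python 3 behavior, the following snippet compiles a function whose body mixes four-space and tab indentation (the source string is contrived purely for illustration):

# A function body indented first with four spaces, then with a tab
source = "def f():\n    x = 1\n\ty = 2\n"

try:
    compile(source, "<mixed>", "exec")
except TabError as err:
    # Python 3 raises TabError; Python 2 would expand the tab to
    # 8-space tab stops and silently accept the same file
    print("Rejected:", err)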

For further information, check out this Q&A on StackExchange:

  1. The first downside is that it quickly becomes a mess

… Formatting should be the task of the IDE. Developers already have enough work; they should not also have to care about the size of tabs, how many spaces an IDE will insert, etc. The code should be formatted correctly, and displayed correctly on other configurations, without forcing developers to think about it.

Also, remember this:

Furthermore, it can be a good idea to avoid tabs altogether, because the semantics of tabs are not very well-defined in the computer world, and they can be displayed completely differently on different types of systems and editors. Also, tabs often get destroyed or wrongly converted during copy-paste operations, or when a piece of source code is inserted into a web page or other kind of markup code.

 
This post originally appeared in Toptal
 




Tackle The Most Complex Code First by Writing Tests That Matter

There are a lot of discussions, articles, and blogs around the topic of code quality. People say – use Test Driven techniques! Tests are a “must have” to start any refactoring! That’s all cool, but it’s 2016 and there is a massive volume of products and code bases still in production that were created ten, fifteen, or even twenty years ago. It’s no secret that a lot of them have legacy code with low test coverage.

While I’d like to always be at the leading, or even bleeding, edge of the technology world – engaged with new cool projects and technologies – unfortunately it’s not always possible and often I have to deal with old systems. I like to say that when you develop from scratch, you act as a creator, mastering new matter. But when you’re working on legacy code, you’re more like a surgeon – you know how the system works in general, but you never know for sure whether the patient will survive your “operation”. And since it’s legacy code, there are not many up-to-date tests for you to rely on. This means that very frequently one of the very first steps is to cover it with tests. More precisely, not merely to provide coverage, but to develop a test coverage strategy.

Coupling and Cyclomatic Complexity: Metrics for Smarter Test Coverage

Forget 100% coverage. Test smarter by identifying classes that are more likely to break.

Basically, what I needed to determine was what parts (classes / packages) of the system we needed to cover with tests in the first place, where we needed unit tests, where integration tests would be more helpful etc. There are admittedly many ways to approach this type of analysis and the one that I’ve used may not be the best, but it’s kind of an automatic approach. Once my approach is implemented, it takes minimal time to actually do the analysis itself and, what is more important, it brings some fun into legacy code analysis.

The main idea here is to analyse two metrics – coupling (i.e., afferent coupling, or CA) and complexity (i.e. cyclomatic complexity).

The first one measures how many classes use our class, so it basically tells us how close a particular class is to the heart of the system; the more classes there are that use our class, the more important it is to cover it with tests.

On the other hand, if a class is very simple (e.g. contains only constants), then even if it’s used by many other parts of the system, it’s not nearly as important to create a test for. Here is where the second metric can help. If a class contains a lot of logic, the Cyclomatic complexity will be high.

The same logic can also be applied in reverse; i.e., even if a class is not used by many classes and represents just one particular use case, it still makes sense to cover it with tests if its internal logic is complex.

There is one caveat though: let’s say we have two classes – one with the CA 100 and complexity 2 and the other one with the CA 60 and complexity 20. Even though the sum of the metrics is higher for the first one we should definitely cover the second one first. This is because the first class is being used by a lot of other classes, but is not very complex. On the other hand, the second class is also being used by a lot of other classes but is relatively more complex than the first class.

To summarize: we need to identify classes with high CA and Cyclomatic complexity. In mathematical terms, a fitness function is needed that can be used as a rating – f(CA,Complexity) – whose values increase along with CA and Complexity.

Generally speaking, the classes with the smallest differences between the two metrics should be given the highest priority for test coverage.

Finding tools to calculate CA and Complexity for the whole code base, and provide a simple way to extract this information in CSV format, proved to be a challenge. During my search, I came across two tools that are free so it would be unfair not to mention them:

A Bit Of Math

The main problem here is that we have two criteria – CA and Cyclomatic complexity – so we need to combine them and convert into one scalar value. If we had a slightly different task – e.g., to find a class with the worst combination of our criteria – we would have a classical multi-objective optimization problem:

We would need to find a point on the so-called Pareto front (red in the picture above). What is interesting about the Pareto set is that every point in the set is a solution to the optimization task. Whenever we move along the red line, we need to make a compromise between our criteria – if one gets better, the other one gets worse. This is called scalarization, and the final result depends on how we do it.

There are a lot of techniques that we can use here. Each has its own pros and cons. However, the most popular ones are linear scalarization and the one based on a reference point. Linear is the easiest one. Our fitness function will look like a linear combination of CA and Complexity:

f(CA, Complexity) = A×CA + B×Complexity

where A and B are some coefficients.

The point which represents a solution to our optimization problem will lie on the line (blue in the picture below). More precisely, it will be at the intersection of the blue line and red Pareto front. Our original problem is not exactly an optimization problem. Rather, we need to create a ranking function. Let’s consider two values of our ranking function, basically two values in our Rank column:

R1 = A×CA1 + B×Complexity1 and R2 = A×CA2 + B×Complexity2

Both of the formulas written above are equations of lines; moreover, these lines are parallel. Taking more rank values into consideration, we’ll get more lines and therefore more points where the Pareto line intersects the (dotted) blue lines. These points will be classes corresponding to a particular rank value.

Unfortunately, there is an issue with this approach. For any line (Rank value), we’ll have points with very small CA and very big Complexity (and vice versa) lying on it. This immediately puts points with a big difference between metric values at the top of the list, which is exactly what we wanted to avoid.

The other way to do the scalarization is based on a reference point. The reference point is a point with the maximum values of both criteria:

(max(CA), max(Complexity))

The fitness function will be the distance between the Reference point and the data points:

f(CA, Complexity) = √((CA − max(CA))² + (Complexity − max(Complexity))²)

We can think about this fitness function as a circle with the center at the reference point. The radius in this case is the value of the Rank. The solution to the optimization problem will be the point where the circle touches the Pareto front. The solution to the original problem will be sets of points corresponding to the different circle radii as shown in the following picture (parts of circles for different ranks are shown as dotted blue curves):

This approach deals better with extreme values, but there are still two issues: First – I’d like to have more points near the reference point to better overcome the problem that we faced with the linear combination. Second – CA and Cyclomatic complexity are inherently different and have different value ranges, so we need to normalize them (e.g., so that all the values of both metrics would be from 1 to 100).

Here is a small trick that we can apply to solve the first issue – instead of looking at the CA and Cyclomatic complexity, we can look at their inverted values. The reference point in this case will be (0,0). To solve the second issue, we can just normalize the metrics using the minimum value. Here is how it looks:

Inverted and normalized complexity – NormComplexity:

(1 + min(Complexity)) / (1 + Complexity) × 100

Inverted and normalized CA – NormCA:

(1 + min(CA)) / (1 + CA) × 100

Note: I added 1 to make sure that there is no division by 0.

The following picture shows a plot with the inverted values:

Final Ranking

We are now coming to the last step – calculating the rank. As mentioned, I’m using the reference point method, so the only thing that we need to do is to calculate the length of the vector, normalize it, and make it increase along with the importance of creating a unit test for a class. Here is the final formula:

Rank(NormComplexity, NormCA) = 100 − √(NormComplexity² + NormCA²) / √2
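To make the pipeline concrete, here is a small Python sketch that ranks classes from a metrics export. The input file name and column names ("class", "ca", "complexity") are assumptions for illustration; the actual scripts live in the GitHub repository linked below.

import csv
import math

# Hypothetical input: a CSV with columns "class", "ca", "complexity"
with open("metrics.csv") as f:
    rows = [(r["class"], int(r["ca"]), int(r["complexity"]))
            for r in csv.DictReader(f)]

min_ca = min(ca for _, ca, _ in rows)
min_cx = min(cx for _, _, cx in rows)

def rank(ca, cx):
    # inverted and normalized metrics, so the reference point becomes (0, 0)
    norm_ca = 100.0 * (1 + min_ca) / (1 + ca)
    norm_cx = 100.0 * (1 + min_cx) / (1 + cx)
    # distance from the reference point, rescaled to the 0..100 range
    return 100 - math.sqrt(norm_cx ** 2 + norm_ca ** 2) / math.sqrt(2)

# highest rank = highest priority for unit test coverage
for name, ca, cx in sorted(rows, key=lambda r: rank(r[1], r[2]), reverse=True):
    print(name, round(rank(ca, cx), 2))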

More Statistics

There is one more thought that I’d like to add, but let’s first have a look at some statistics. Here is a histogram of the Coupling metrics:

What is interesting about this picture is the number of classes with low CA (0-2). Classes with CA 0 are either not used at all or are top-level services. These represent API endpoints, so it’s fine that we have a lot of them. But classes with CA 1 are the ones that are directly used by the endpoints, and we have more of these classes than endpoints. What does this mean from an architecture/design perspective?

In general, it means that we have a kind of script-oriented approach – we script every business case separately (we can’t really reuse the code, as business cases are too diverse). If that is the case, then it’s definitely a code smell and we need to refactor. Otherwise, it means the cohesion of our system is low, in which case we also need refactoring, but architectural refactoring this time.

Additional useful information we can get from the histogram above is that we can completely filter out classes with low coupling (CA in {0,1}) from the list of the classes eligible for coverage with unit tests. The same classes, though, are good candidates for the integration / functional tests.

You can find all the scripts and resources that I have used in this GitHub repository: ashalitkin/code-base-stats.

Does It Always Work?

Not necessarily. First of all it’s all about static analysis, not runtime. If a class is linked from many other classes it can be a sign that it’s heavily used, but it’s not always true. For example, we don’t know whether the functionality is really heavily used by end users. Second, if the design and the quality of the system is good enough, then most likely different parts / layers of it are decoupled via interfaces so static analysis of the CA will not give us a true picture. I guess it’s one of the main reasons why CA is not that popular in tools like Sonar. Fortunately, it’s totally fine for us since, if you remember, we are interested in applying this specifically to old ugly code bases.

In general, I’d say that runtime analysis would give much better results, but unfortunately it’s much more costly, time consuming, and complex, so our approach is a potentially useful and lower cost alternative.

This article was written by Andrey Shalitkin, a Toptal Java developer.



How to Create a Simple Python WebSocket Server Using Tornado

With the increase in popularity of real-time web applications, WebSockets have become a key technology in their implementation. The days when you had to constantly press the reload button to receive updates from the server are long gone. Web applications that want to provide real-time updates no longer have to poll the server for changes - instead, servers push changes down the stream as they happen. Robust web frameworks have begun supporting WebSockets out of the box. Ruby on Rails 5, for example, took it even further and added support for action cables.

In the world of Python, many popular web frameworks exist. Frameworks such as Django provide nearly everything necessary to build web applications, and anything they lack can be made up for with one of the thousands of plugins available for Django. However, due to the way Python and most of its web frameworks work, handling long-lived connections can quickly become a nightmare. The threaded model and global interpreter lock are often considered to be the Achilles heel of Python.

But all of that has started to change. With certain new features of Python 3 and frameworks that already exist for Python, such as Tornado, handling long-lived connections is a challenge no more. Tornado provides web server capabilities in Python that are specifically useful for handling long-lived connections.


In this article, we will take a look at how a simple WebSocket server can be built in Python using Tornado. The demo application will allow us to upload a tab-separated values (TSV) file, parse it and make its contents available at a unique URL.

Tornado and WebSockets

Tornado is an asynchronous network library that specializes in event-driven networking. Since it can naturally hold tens of thousands of open connections concurrently, a server can take advantage of this and handle a lot of WebSocket connections within a single node. WebSocket is a protocol that provides full-duplex communication channels over a single TCP connection. As it is an open socket, this technique makes a web connection stateful and facilitates real-time data transfer to and from the server. The server, keeping the states of the clients, makes it easy to implement real-time chat applications or web games based on WebSockets.

WebSockets are designed to be implemented in web browsers and servers, and are currently supported in all of the major web browsers. A connection is opened once, and messages can travel back and forth multiple times before the connection is closed.

Installing Tornado is rather simple. It is listed on PyPI and can be installed using pip or easy_install:

pip install tornado

Tornado comes with its own implementation of WebSockets. For the purposes of this article, this is pretty much all we will need.
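Before building the demo, it may help to see the shape of Tornado's WebSocket API in isolation. The following is a minimal echo server sketch, not part of the demo application:

import tornado.ioloop
import tornado.web
import tornado.websocket

class EchoWebSocket(tornado.websocket.WebSocketHandler):
    def open(self):
        print("WebSocket opened")

    def on_message(self, message):
        # Echo every message straight back to the client
        self.write_message(u"You said: " + message)

    def on_close(self):
        print("WebSocket closed")

app = tornado.web.Application([(r"/ws", EchoWebSocket)])

if __name__ == "__main__":
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()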

WebSockets in Action

One of the advantages of using WebSocket is its stateful property. This changes the way we typically think of client-server communication. One particular use case of this is where the server is required to perform long slow processes and gradually stream results back to the client.

In our example application, the user will be able to upload a file through WebSocket. For the entire lifetime of the connection, the server will retain the parsed file in-memory. Upon requests, the server can then send back parts of the file to the front-end. Furthermore, the file will be made available at a URL which can then be viewed by multiple users. If another file is uploaded at the same URL, everyone looking at it will be able to see the new file immediately.

For the front-end, we will use AngularJS. This framework, along with supporting libraries, will allow us to easily handle file uploads and pagination. For everything related to WebSockets, however, we will use standard JavaScript functions.

This simple application will be broken down into three separate files:

  • parser.py: where our Tornado server with the request handlers is implemented
  • templates/index.html: front-end HTML template
  • static/parser.js: for our front-end JavaScript

Opening a WebSocket

From the front-end, a WebSocket connection can be established by instantiating a WebSocket object:

new WebSocket(WEBSOCKET_URL);

This is something we will have to do on page load. Once a WebSocket object is instantiated, handlers must be attached to handle three important events:

  • open: fired when a connection is established
  • message: fired when a message is received from the server
  • close: fired when a connection is closed
$scope.init = function() {
    $scope.ws = new WebSocket('ws://' + location.host + '/parser/ws');
    $scope.ws.binaryType = 'arraybuffer';

    $scope.ws.onopen = function() {
        console.log('Connected.');
    };

    $scope.ws.onmessage = function(evt) {
        $scope.$apply(function () {
            message = JSON.parse(evt.data);
            $scope.currentPage = parseInt(message['page_no']);
            $scope.totalRows = parseInt(message['total_number']);
            $scope.rows = message['data'];
        });
    };

    $scope.ws.onclose = function() {
        console.log('Connection is closed...');
    };
}
$scope.init();

Since these event handlers will not automatically trigger AngularJS’s $scope lifecycle, the contents of the handler functions need to be wrapped in $apply. In case you are interested, AngularJS-specific packages exist that make it easier to integrate WebSockets into AngularJS applications.

It’s worth mentioning that dropped WebSocket connections are not automatically reestablished; the application will need to attempt to reconnect when the close event handler is triggered. That is a bit beyond the scope of this article.

Selecting a File to Upload

Since we are building a single-page application using AngularJS, attempting to submit forms with files the age-old way will not work. To make things easier, we will use Danial Farid’s ng-file-upload library. With it, all we need to do to allow a user to upload a file is add a button to our front-end template with specific AngularJS directives:

<button class="btn btn-default" type="file" ngf-select="uploadFile($file, $invalidFiles)"
accept=".tsv" ngf-max-size="10MB">Select File</button>

The library, among many things, allows us to set the acceptable file extensions and size. Clicking on this button, just like any <input type="file"> element, will open the standard file picker.

Uploading the File

When you want to transfer binary data, you can choose between an array buffer and a blob. If it is just raw data like an image file, choose blob and handle it properly on the server. An array buffer is for a fixed-length binary buffer, and a text file like TSV can be transferred as a byte string. This code snippet shows how to upload a file in array buffer format.

$scope.uploadFile = function(file, errFiles) {
    ws = $scope.ws;
    $scope.f = file;
    $scope.errFile = errFiles && errFiles[0];
    if (file) {
        reader = new FileReader();
        rawData = new ArrayBuffer();
        reader.onload = function(evt) {
            rawData = evt.target.result;
            ws.send(rawData);
        }
        reader.readAsArrayBuffer(file);
    }
}

The ng-file-upload directive provides an uploadFile function. Here you can transform the file into an array buffer using a FileReader, and send it through the WebSocket.

Note that sending large files over WebSocket by reading them into array buffers may not be the optimal way to upload them, as it can quickly occupy too much memory, resulting in a poor experience.

Receive the File on the Server

Tornado determines the message type using the 4-bit opcode, and returns str for binary data and unicode for text.

if opcode == 0x1:
    # UTF-8 data
    self._message_bytes_in += len(data)
    try:
        decoded = data.decode("utf-8")
    except UnicodeDecodeError:
        self._abort()
        return
    self._run_callback(self.handler.on_message, decoded)
elif opcode == 0x2:
    # Binary data
    self._message_bytes_in += len(data)
    self._run_callback(self.handler.on_message, data)

In the Tornado web server, an array buffer is received as type str.

In this example the type of content we expect is TSV, so the file is parsed and transformed into a dictionary. Of course, in real applications, there are saner ways of dealing with arbitrary uploads.

def make_message(self, page_no=1):
    page_size = 100
    return {
        "page_no": page_no,
        "total_number": len(self.rows),
        "data": self.rows[page_size * (page_no - 1):page_size * page_no]
    }

def on_message(self, message):
    if isinstance(message, str):
        self.rows = [csv.reader([line], delimiter="\t").next()
                     for line in (x.strip() for x in message.splitlines()) if line]
        self.write_message(self.make_message())

Request a Page

Since our goal is to show uploaded TSV data in chunks of small pages, we need a means of requesting a particular page. To keep things simple, we will simply use the same WebSocket connection to send the page number to our server.

$scope.pageChanged = function() {
    ws = $scope.ws;
    ws.send($scope.currentPage);
}

The server will receive this message as unicode:

def on_message(self, message):
    if isinstance(message, unicode):
        page_no = int(message)
        self.write_message(self.make_message(page_no))

Attempting to respond with a dict from a Tornado WebSocket server will automatically encode it in JSON format. So it’s completely okay to just send a dict which contains 100 rows of content.

Sharing Access with Others

To be able to share access to the same upload with multiple users, we need to be able to uniquely identify the uploads. Whenever a user connects to the server over WebSocket, a random UUID will be generated and assigned to their connection.

def open(self, doc_uuid=None):
    if doc_uuid is None:
        self.uuid = str(uuid.uuid4())

uuid.uuid4() generates a random UUID and str() converts a UUID to a string of hex digits in standard form.

If another user with a UUID connects to the server, the corresponding instance of FileHandler is added to a dictionary with the UUID as the key and is removed when the connection is closed.

@classmethod
@tornado.gen.coroutine
def add_clients(cls, doc_uuid, client):
    with (yield lock.acquire()):
        if doc_uuid in cls.clients:
            clients_with_uuid = FileHandler.clients[doc_uuid]
            clients_with_uuid.append(client)
        else:
            FileHandler.clients[doc_uuid] = [client]

@classmethod
@tornado.gen.coroutine
def remove_clients(cls, doc_uuid, client):
    with (yield lock.acquire()):
        if doc_uuid in cls.clients:
            clients_with_uuid = FileHandler.clients[doc_uuid]
            clients_with_uuid.remove(client)
            if len(clients_with_uuid) == 0:
                del cls.clients[doc_uuid]

The clients dictionary may throw a KeyError when adding or removing clients simultaneously. As Tornado is an asynchronous networking library, it provides locking mechanisms for synchronization. A simple lock used with a coroutine fits this case of guarding the clients dictionary.
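For completeness, the lock referenced in the snippets above is not shown in the excerpts. Assuming Tornado 4.2 or later, one way it might be created is with the tornado.locks module:

from tornado.locks import Lock

# A single module-level lock guarding the shared clients dictionary
lock = Lock()

# Inside a coroutine, acquire it the same way the handlers above do:
#     with (yield lock.acquire()):
#         ...mutate FileHandler.clients safely...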

If any user uploads a file or moves between pages, all the users with the same UUID view the same page.

@classmethod
def send_messages(cls, doc_uuid):
    clients_with_uuid = cls.clients[doc_uuid]
    message = cls.make_message(doc_uuid)
    for client in clients_with_uuid:
        try:
            client.write_message(message)
        except:
            logging.error("Error sending message", exc_info=True)

Running Behind Nginx

Implementing WebSockets is very simple, but there are some tricky things to consider when using it in production environments. Tornado is a web server, so it can get users’ requests directly, but deploying it behind Nginx may be a better choice for many reasons. However, it takes ever so slightly more effort to be able to use WebSockets through Nginx:

http {
    upstream parser {
        server 127.0.0.1:8080;
    }

    server {
        location ^~ /parser/ws {
            proxy_pass http://parser;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}

The two proxy_set_header directives make Nginx pass along the headers necessary for upgrading the connection to WebSocket.

What’s Next?

In this article, we implemented a simple Python web application that uses WebSockets to maintain persistent connections between the server and each of the clients. With modern asynchronous networking frameworks like Tornado, holding tens of thousands of open connections concurrently in Python is entirely feasible.

Although certain implementation aspects of this demo application could have been done differently, I hope it still helped demonstrate the usage of WebSockets in the Tornado framework. Source code of the demo application is available on GitHub.

Originally appeared in the Toptal Engineering Blog



Why Are There So Many Pythons? A Python Implementation Comparison

Python is amazing.

Surprisingly, that’s a fairly ambiguous statement. What do I mean by ‘Python’? Do I mean Python the abstract interface? Do I mean CPython, the common Python implementation (and not to be confused with the similarly named Cython)? Or do I mean something else entirely? Maybe I’m obliquely referring to Jython, or IronPython, or PyPy. Or maybe I’ve really gone off the deep end and I’m talking about RPython or RubyPython (which are very, very different things).

While the technologies mentioned above are commonly-named and commonly-referenced, some of them serve completely different purposes (or, at least, operate in completely different ways).

Throughout my time working with the Python interfaces, I’ve run across tons of these .*ython tools. But not until recently did I take the time to understand what they are, how they work, and why they’re necessary (in their own ways).

In this tutorial, I’ll start from scratch and move through the various Python implementations, concluding with a thorough introduction to PyPy, which I believe is the future of the language.

It all starts with an understanding of what ‘Python’ actually is.

If you have a good understanding of machine code, virtual machines, and the like, feel free to skip ahead.

“Is Python interpreted or compiled?”

This is a common point of confusion for Python beginners.

The first thing to realize when making a comparison is that ‘Python’ is an interface. There’s a specification of what Python should do and how it should behave (as with any interface). And there are multiple implementations (as with any interface).

The second thing to realize is that ‘interpreted’ and ‘compiled’ are properties of an implementation, not an interface.

So the question itself isn’t really well-formed.

Is Python interpreted or compiled? The question isn't really well-formed.

That said, for the most common Python implementation (CPython: written in C, often referred to as simply ‘Python’, and surely what you’re using if you have no idea what I’m talking about), the answer is: interpreted, with some compilation. CPython compiles* Python source code to bytecode, and then interprets this bytecode, executing it as it goes.

Note: this isn’t ‘compilation’ in the traditional sense of the word. Typically, we’d say that ‘compilation’ is taking a high-level language and converting it to machine code. But it is a ‘compilation’ of sorts.

Let’s look at that answer more closely, as it will help us understand some of the concepts that come up later in the post.

Bytecode vs. Machine Code

It’s very important to understand the difference between bytecode vs. machine code (aka native code), perhaps best illustrated by example:

  • C compiles to machine code, which is then run directly on your processor. Each instruction instructs your CPU to move stuff around.
  • Java compiles to bytecode, which is then run on the Java Virtual Machine (JVM), an abstraction of a computer that executes programs. Each instruction is then handled by the JVM, which interacts with your computer.

In very brief terms: machine code is much faster, but bytecode is more portable and secure.

Machine code looks different depending on your machine, but bytecode looks the same on all machines. One might say that machine code is optimized to your setup.

Returning to the CPython implementation, the toolchain process is as follows:

  1. CPython compiles your Python source code into bytecode.
  2. That bytecode is then executed on the CPython Virtual Machine.
Beginners often assume Python is compiled because of .pyc files. There's some truth to that: the .pyc file is the compiled bytecode, which is then interpreted. So if you've run your Python code before and have the .pyc file handy, it will run faster the second time, as it doesn't have to re-compile the bytecode.
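You can inspect this bytecode yourself with the standard dis module:

import dis

def add(a, b):
    return a + b

# Prints the bytecode instructions CPython compiled the function into
# (LOAD_FAST, BINARY_ADD, RETURN_VALUE on most versions)
dis.dis(add)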

Alternative VMs: Jython, IronPython, and More

As I mentioned earlier, Python has several implementations, and the most common is CPython: a Python implementation written in C and considered the ‘default’ implementation. But there are others that should be mentioned for the sake of this comparison guide.

But what about the alternative Python implementations? One of the more prominent is Jython, a Python implementation written in Java that utilizes the JVM. While CPython produces bytecode to run on the CPython VM, Jython produces Java bytecode to run on the JVM (this is the same stuff that’s produced when you compile a Java program).

Jython's use of Java bytecode is depicted in this Python implementation diagram.

“Why would you ever use an alternative implementation?”, you might ask. Well, for one, these different Python implementations play nicely with different technology stacks.

CPython makes it very easy to write C-extensions for your Python code because in the end it is executed by a C interpreter. Jython, on the other hand, makes it very easy to work with other Java programs: you can import any Java classes with no additional effort, summoning up and utilizing your Java classes from within your Jython programs. (Aside: if you haven’t thought about it closely, this is actually nuts. We’re at the point where you can mix and mash different languages and compile them all down to the same substance. (As mentioned by Rostin, programs that mix Fortran and C code have been around for a while. So, of course, this isn’t necessarily new. But it’s still cool.))

As an example, this is valid Jython code:

[Java HotSpot(TM) 64-Bit Server VM (Apple Inc.)] on java1.6.0_51
>>> from java.util import HashSet
>>> s = HashSet(5)
>>> s.add("Foo")
>>> s.add("Bar")
>>> s
[Foo, Bar]

IronPython is another popular Python implementation, written entirely in C# and targeting the .NET stack. In particular, it runs on what you might call the .NET Virtual Machine, Microsoft’s Common Language Runtime (CLR), comparable to the JVM.

You might say that Jython : Java :: IronPython : C#. They run on the same respective VMs, you can import C# classes from your IronPython code and Java classes from your Jython code, etc.

It’s totally possible to survive without ever touching a non-CPython Python implementation. But there are advantages to be had from switching, most of which are dependent on your technology stack. Using a lot of JVM-based languages? Jython might be for you. All about the .NET stack? Maybe you should try IronPython (and maybe you already have).

This Python comparison chart demonstrates the differences between Python implementations.

By the way: while this wouldn’t be a reason to use a different implementation, note that these implementations do actually differ in behavior beyond how they treat your Python source code. However, these differences are typically minor, and dissolve or emerge over time as these implementations are under active development. For example, IronPython uses Unicode strings by default; CPython, however, defaults to ASCII for versions 2.x (failing with a UnicodeEncodeError for non-ASCII characters), but does support Unicode strings by default for 3.x.
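Here is a small snippet illustrating that default-encoding difference (the u'' literal also parses on CPython 3.3+, where the call simply succeeds):

# -*- coding: utf-8 -*-
try:
    u'héllo'.encode()  # CPython 2.x: the default codec is ASCII
except UnicodeEncodeError as err:
    print("CPython 2.x:", err)
else:
    print("CPython 3.x: default is UTF-8 ->", u'héllo'.encode())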

Just-in-Time Compilation: PyPy, and the Future

So we have a Python implementation written in C, one in Java, and one in C#. The next logical step: a Python implementation written in… Python. (The educated reader will note that this is slightly misleading.)

Here’s where things might get confusing. First, let’s discuss just-in-time (JIT) compilation.

JIT: The Why and How

Recall that native machine code is much faster than bytecode. Well, what if we could compile some of our bytecode and then run it as native code? We’d have to pay some price to compile the bytecode (i.e., time), but if the end result was faster, that’d be great! This is the motivation of JIT compilation, a hybrid technique that mixes the benefits of interpreters and compilers. In basic terms, JIT wants to utilize compilation to speed up an interpreted system.

For example, here is a common approach taken by JITs (sketched in toy code after the list):

  1. Identify bytecode that is executed frequently.
  2. Compile it down to native machine code.
  3. Cache the result.
  4. Whenever the same bytecode is set to be run, instead grab the pre-compiled machine code and reap the benefits (i.e., speed boosts).
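To make the idea tangible, here is a toy Python sketch of the hot-path bookkeeping only. Everything in it is hypothetical: a real JIT emits native machine code, whereas here "compiling" is just swapping in a stand-in handler.

HOT_THRESHOLD = 1000  # arbitrary cutoff for "executed frequently"

class ToyDispatcher(object):
    def __init__(self, handlers):
        self.handlers = dict(handlers)            # opcode -> handler
        self.counts = dict.fromkeys(handlers, 0)  # execution counters

    def execute(self, opcode, *args):
        self.counts[opcode] += 1
        if self.counts[opcode] == HOT_THRESHOLD:
            # Steps 2-3: "compile" the hot handler and cache the result
            self.handlers[opcode] = self.specialize(self.handlers[opcode])
        # Step 4: later calls reuse whatever is cached for this opcode
        return self.handlers[opcode](*args)

    @staticmethod
    def specialize(handler):
        return handler  # a real JIT would generate machine code here

dispatcher = ToyDispatcher({"ADD": lambda a, b: a + b})
print(dispatcher.execute("ADD", 1, 2))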

This is what the PyPy implementation is all about: bringing JIT to Python (see the Appendix for previous efforts). There are, of course, other goals: PyPy aims to be cross-platform, memory-light, and stackless-supportive. But JIT is really its selling point. As an average over a bunch of time tests, it’s said to improve performance by a factor of 6.27. For a breakdown, see this chart from the PyPy Speed Center:

Bringing JIT to Python interface using PyPy implementation pays off in performance improvements.

PyPy is Hard to Understand

PyPy has huge potential, and at this point it’s highly compatible with CPython (so it can run Flask, Django, etc.).

But there’s a lot of confusion around PyPy (see, for example, this nonsensical proposal to create a PyPyPy…). In my opinion, that’s primarily because PyPy is actually two things:

  1. A Python interpreter written in RPython (not Python (I lied before)). RPython is a subset of Python with static typing. In Python, it’s “mostly impossible” to reason rigorously about types (Why is it so hard? Well consider the fact that:

     x = random.choice([1, "foo"])

    would be valid Python code (credit to Ademan). What is the type of x? How can we reason about types of variables when the types aren’t even strictly enforced?). With RPython, you sacrifice some flexibility, but instead make it much, much easier to reason about memory management and whatnot, which allows for optimizations.

  2. A compiler that compiles RPython code for various targets and adds in JIT. The default platform is C, i.e., an RPython-to-C compiler, but you could also target the JVM and others.

Solely for clarity in this Python comparison guide, I’ll refer to these as PyPy (1) and PyPy (2).

Why would you need these two things, and why under the same roof? Think of it this way: PyPy (1) is an interpreter written in RPython. So it takes in the user’s Python code and compiles it down to bytecode. But the interpreter itself (written in RPython) must be interpreted by another Python implementation in order to run, right?

Well, we could just use CPython to run the interpreter. But that wouldn’t be very fast.

Instead, the idea is that we use PyPy (2) (referred to as the RPython Toolchain) to compile PyPy’s interpreter down to code for another platform (e.g., C, JVM, or CLI) to run on our machine, adding in JIT as well. It’s magical: PyPy dynamically adds JIT to an interpreter, generating its own compiler! (Again, this is nuts: we’re compiling an interpreter, adding in another separate, standalone compiler.)

In the end, the result is a standalone executable that interprets Python source code and exploits JIT optimizations. Which is just what we wanted! It’s a mouthful, but maybe this diagram will help:

This diagram illustrates the beauty of the PyPy implementation, including an interpreter, compiler, and an executable with JIT.

To reiterate, the real beauty of PyPy is that we could write ourselves a bunch of different Python interpreters in RPython without worrying about JIT. PyPy would then implement JIT for us using the RPython Toolchain/PyPy (2).

In fact, if we get even more abstract, you could theoretically write an interpreter for any language, feed it to PyPy, and get a JIT for that language. This is because PyPy focuses on optimizing the actual interpreter, rather than the details of the language it’s interpreting.

You could theoretically write an interpreter for any language, feed it to PyPy, and get a JIT for that language.

As a brief digression, I’d like to mention that the JIT itself is absolutely fascinating. It uses a technique called tracing, which executes as follows:

  1. Run the interpreter and interpret everything (adding in no JIT).
  2. Do some light profiling of the interpreted code.
  3. Identify operations you’ve performed before.
  4. Compile these bits of code down to machine code.

For more, this paper is highly accessible and very interesting.

To wrap up: we use PyPy’s RPython-to-C (or other target platform) compiler to compile PyPy’s RPython-implemented interpreter.

Wrapping Up

After a lengthy comparison of Python implementations, I have to ask myself: Why is this so great? Why is this crazy idea worth pursuing? I think Alex Gaynor put it well on his blog: “[PyPy is the future] because [it] offers better speed, more flexibility, and is a better platform for Python’s growth.”

In short:

  • It’s fast because it compiles source code to native code (using JIT).
  • It’s flexible because it adds the JIT to your interpreter with very little additional work.
  • It’s flexible (again) because you can write your interpreters in RPython, which is easier to extend than, say, C (in fact, it’s so easy that there’s a tutorial for writing your own interpreters).

Appendix: Other Python Names You May Have Heard

  • Python 3000 (Py3k): an alternative naming for Python 3.0, a major, backwards-incompatible Python release that hit the stage in 2008. The Py3k team predicted that it would take about five years for this new version to be fully adopted. And while most (warning: anecdotal claim) Python developers continue to use Python 2.x, people are increasingly conscious of Py3k.

  • Cython: a superset of Python that includes bindings to call C functions.
    • Goal: allow you to write C extensions for your Python code.
    • Also lets you add static typing to your existing Python code, allowing it to be compiled and reach C-like performance.
    • This is similar to PyPy, but not the same. In this case, you’re enforcing typing in the user’s code before passing it to a compiler. With PyPy, you write plain old Python, and the compiler handles any optimizations.

  • Numba: a “just-in-time specializing compiler” that adds JIT to annotated Python code. In the most basic terms, you give it some hints, and it speeds up portions of your code. Numba comes as part of the Anaconda distribution, a set of packages for data analysis and management.

  • IPython: very different from anything else discussed. An interactive computing environment for Python, with support for GUI toolkits, a browser-based notebook experience, etc.

  • Psyco: a Python extension module, and one of the early Python JIT efforts. However, it’s since been marked as “unmaintained and dead”. In fact, the lead developer of Psyco, Armin Rigo, now works on PyPy.

Python Language Bindings

  • RubyPython: a bridge between the Ruby and Python VMs. Allows you to embed Python code into your Ruby code. You define where the Python starts and stops, and RubyPython marshals the data between the VMs.

  • PyObjc: language-bindings between Python and Objective-C, acting as a bridge between them. Practically, that means you can utilize Objective-C libraries (including everything you need to create OS X applications) from your Python code, and Python modules from your Objective-C code. In this case, it’s convenient that CPython is written in C, which is a subset of Objective-C.

  • PyQt: while PyObjc gives you binding for the OS X GUI components, PyQt does the same for the Qt application framework, letting you create rich graphic interfaces, access SQL databases, etc. Another tool aimed at bringing Python’s simplicity to other frameworks.

JavaScript Frameworks

  • pyjs (Pyjamas): a framework for creating web and desktop applications in Python. Includes a Python-to-JavaScript compiler, a widget set, and some more tools.

  • Brython: a Python VM written in JavaScript to allow for Py3k code to be executed in the browser.

This article was written by Charles Marsh, Toptal’s Head of Community.


Scaling Scala: How to Dockerize Using Kubernetes

Kubernetes is the new kid on the block, promising to help deploy applications into the cloud and scale them more quickly. Today, when developing for a microservices architecture, it’s pretty standard to choose Scala for creating API servers.

Microservices are replacing classic monolithic back-end servers with multiple independent services that communicate among themselves and have their own processes and resources.

If there is a Scala application in your plans and you want to scale it into a cloud, then you are at the right place. In this article I am going to show step-by-step how to take a generic Scala application and implement Kubernetes with Docker to launch multiple instances of the application. The final result will be a single application deployed as multiple instances, and load balanced by Kubernetes.

All of this will be implemented by simply importing the Kubernetes source kit in your Scala application. Please note, the kit hides a lot of complicated details related to installation and configuration, but it is small enough to be readable and easy to understand if you want to analyze what it does. For simplicity, we will deploy everything on your local machine. However, the same configuration is suitable for a real-world cloud deployment of Kubernetes.

Scale Your Scala Application with Kubernetes
Be smart and sleep tight, scale your Docker with Kubernetes.

What Is Kubernetes?

Before going into the gory details of the implementation, let’s discuss what Kubernetes is and why it’s important.

You may have already heard of Docker. In a sense, it is a lightweight virtual machine.

Docker gives the advantage of deploying each server in an isolated environment, very similar to a stand-alone virtual machine, without the complexity of managing a full-fledged virtual machine.

For these reasons, it is already one of the more widely used tools for deploying applications in clouds. A Docker image is pretty easy and fast to build and duplicate, much more so than a traditional virtual machine like VMWare, VirtualBox, or XEN.

Kubernetes complements Docker, offering a complete environment for managing dockerized applications. By using Kubernetes, you can easily deploy, configure, orchestrate, manage, and monitor hundreds or even thousands of Docker applications.

Kubernetes is an open source tool developed by Google and has been adopted by many other vendors. Kubernetes is available natively on the Google Cloud Platform, but other vendors have adopted it for their cloud services too. It can be found on Amazon AWS, Microsoft Azure, RedHat OpenShift, and even more cloud technologies. We can say it is well positioned to become a standard for deploying cloud applications.

Prerequisites

Now that we covered the basics, let’s check if you have all the prerequisite software installed. First of all, you need Docker. If you are using either Windows or Mac, you need the Docker Toolbox. If you are using Linux, you need to install the particular package provided by your distribution or simply follow the official directions.

We are going to code in Scala, which is a JVM language. You need, of course, the Java Development Kit and the Scala build tool SBT installed and available in the global path. If you are already a Scala programmer, chances are you have those tools already installed.

If you are using Windows or Mac, Docker will by default create a virtual machine named default with only 1GB of memory, which can be too small for running Kubernetes. In my experience, I had issues with the default settings. I recommend that you open the VirtualBox GUI, select your virtual machine default, and change the memory to at least 2048MB.

VirtualBox memory settings

The Application to Clusterize

The instructions in this tutorial can apply to any Scala application or project. For this article to have some “meat” to work on, I chose an example used very often to demonstrate a simple REST microservice in Scala, built with Akka HTTP. I recommend you try to apply the source kit to the suggested example before attempting to use it on your application. I have tested the kit against the demo application, but I cannot guarantee that there will be no conflicts with your code.

So first, we start by cloning the demo application:

git clone https://github.com/theiterators/akka-http-microservice

Next, test if everything works correctly:

cd akka-http-microservice
sbt run

Then, access http://localhost:9000/ip/8.8.8.8, and you should see something like in the following image:

Akka HTTP microservice is running

Adding the Source Kit

Now, we can add the source kit with some Git magic:

git remote add ScalaGoodies https://github.com/sciabarra/ScalaGoodies
git fetch --all
git merge ScalaGoodies/kubernetes

With that, you have the demo including the source kit, and you are ready to try it. Or you can even copy and paste the code from there into your application.

Once you have merged or copied the files in your projects, you are ready to start.

Starting Kubernetes

Once you have downloaded the kit, you need to download the necessary kubectl binary by running:

bin/install.sh

This installer is smart enough (hopefully) to download the correct kubectl binary for OSX, Linux, or Windows, depending on your system. Note, the installer worked on the systems I own. Please do report any issues, so that I can fix the kit.

Once you have installed the kubectl binary, you can start the whole Kubernetes in your local Docker. Just run:

bin/start-local-kube.sh

The first time it is run, this command will download the images of the whole Kubernetes stack, and a local registry needed to store your images. It can take some time, so please be patient. Also note, it needs direct access to the internet. If you are behind a proxy, that will be a problem, as the kit does not support proxies. To solve it, you would have to configure tools like Docker, curl, and so on to use the proxy. It is complicated enough that I recommend getting temporary unrestricted access.

Assuming you were able to download everything successfully, to check if Kubernetes is running fine, you can type the following command:

bin/kubectl get nodes

The expected answer is:

NAME        STATUS    AGE
127.0.0.1   Ready     2m

Note that age may vary, of course. Also, since starting Kubernetes can take some time, you may have to invoke the command a couple of times before you see the answer. If you do not get errors here, congratulations, you have Kubernetes up and running on your local machine.

Dockerizing Your Scala App

Now that you have Kubernetes up and running, you can deploy your application in it. In the old days, before Docker, you had to deploy an entire server for running your application. With Kubernetes, all you need to do to deploy your application is:

  • Create a Docker image.
  • Push it in a registry from where it can be launched.
  • Launch an instance with Kubernetes, which will pull the image from the registry.

Luckily, it is way less complicated than it looks, especially if you are using the SBT build tool like many do.

In the kit, I included two files containing all the necessary definitions to create an image able to run Scala applications, or at least what is needed to run the Akka HTTP demo. I cannot guarantee that it will work with every other Scala application, but it is a good starting point, and should work for many different configurations. The files to look at for building the Docker image are:

docker.sbt
project/docker.sbt

Let’s have a look at what’s in them. The file project/docker.sbt contains the command to import the sbt-docker plugin:

addSbtPlugin("se.marcuslonnberg" % "sbt-docker" % "1.4.0")

This plugin manages the building of the Docker image with SBT for you. The Docker definition is in the docker.sbt file and looks like this:

imageNames in docker := Seq(ImageName("localhost:5000/akkahttp:latest"))

dockerfile in docker := {
  val jarFile: File = sbt.Keys.`package`.in(Compile, packageBin).value
  val classpath = (managedClasspath in Compile).value
  val mainclass = mainClass.in(Compile, packageBin).value.getOrElse(sys.error("Expected exactly one main class"))
  val jarTarget = s"/app/${jarFile.getName}"
  val classpathString = classpath.files.map("/app/" + _.getName)
    .mkString(":") + ":" + jarTarget
  new Dockerfile {
    from("anapsix/alpine-java:8")
    add(classpath.files, "/app/")
    add(jarFile, jarTarget)
    entryPoint("java", "-cp", classpathString, mainclass)
  }
}

To fully understand the meaning of this file, you need to know Docker well enough to read a Dockerfile. However, we are not going into the details here, because you do not need to understand it thoroughly to build the image.

The beauty of using SBT for building the Docker image is that the SBT will take care of collecting all the files for you.

Note the classpath is automatically generated by the following command:

val classpath = (managedClasspath in Compile).value

In general, it is pretty complicated to gather all the JAR files needed to run an application. Using SBT, the Dockerfile is generated with add(classpath.files, "/app/"). This way, SBT collects all the JAR files for you and constructs a Dockerfile that runs your application.

The other commands gather the missing pieces to create the Docker image. The image is built on top of an existing image suited to running Java programs (anapsix/alpine-java:8, available on Docker Hub). The remaining instructions add the other files needed to run your application. Finally, by specifying an entry point, we can run it. Note also that the name starts with localhost:5000 on purpose, because localhost:5000 is where I installed the registry in the start-kube-local.sh script.

Building the Docker Image with SBT

To build the Docker image, you can ignore all the details of the Dockerfile. You just need to type:

sbt dockerBuildAndPush

The sbt-docker plugin will then build a Docker image for you, downloading all the necessary pieces from the internet, and then push it to the Docker registry that was started earlier, together with the Kubernetes application, on localhost. So, all you need to do is wait a little bit longer to have your image cooked and ready.

Note, if you experience problems, the best thing to do is to reset everything to a known state by running the following commands:

bin/stop-kube-local.sh
bin/start-kube-local.sh

Those commands should stop all the containers and restart them correctly to get your registry ready to receive the image built and pushed by sbt.

Starting the Service in Kubernetes

Now that the application is packaged in a container and pushed to a registry, we are ready to use it. Kubernetes uses command lines and configuration files to manage the cluster. Since command lines can become very long, and to make the steps reproducible, I am using the configuration files here. All the samples in the source kit are in the folder kube.

Our next step is to launch a single instance of the image. A running image is called, in the Kubernetes language, a pod. So let’s create a pod by invoking the following command:

bin/kubectl create -f kube/akkahttp-pod.yml

You can now inspect the situation with the command:

bin/kubectl get pods

You should see:

NAME                   READY     STATUS    RESTARTS   AGE
akkahttp               1/1       Running   0          33s
k8s-etcd-127.0.0.1     1/1       Running   0          7d
k8s-master-127.0.0.1   4/4       Running   0          7d
k8s-proxy-127.0.0.1    1/1       Running   0          7d

The status can actually differ; for example, it can show “ContainerCreating” for a few seconds before it becomes “Running”. You can also get another status like “Error” if, for example, you forgot to create the image beforehand.

You can also check if your pod is running with the command:

bin/kubectl logs akkahttp

You should see an output ending with something like this:

[DEBUG] [05/30/2016 12:19:53.133] [default-akka.actor.default-dispatcher-5] [akka://default/system/IO-TCP/selectors/$a/0] Successfully bound to /0:0:0:0:0:0:0:0:9000

Now you have the service up and running inside the container. However, the service is not yet reachable. This behavior is part of the design of Kubernetes. Your pod is running, but you have to expose it explicitly. Otherwise, the service is meant to be internal.

Creating a Service

Creating a service and checking the result is a matter of executing:

bin/kubectl create -f kube/akkahttp-service.yaml
bin/kubectl get svc

You should see something like this:

NAME               CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
akkahttp-service   10.0.0.54                  9000/TCP   44s
kubernetes         10.0.0.1     <none>        443/TCP    3m

Note that the cluster IP can be different; Kubernetes allocated it for the service and started it. If you are using Linux, you can directly open the browser and type http://10.0.0.54:9000/ip/8.8.8.8 to see the result. If you are using Windows or Mac with Docker Toolbox, the IP is local to the virtual machine that is running Docker, and unfortunately it is still unreachable.

I want to stress here that this is not a problem of Kubernetes, rather it is a limitation of the Docker Toolbox, which in turn depends on the constraints imposed by virtual machines like VirtualBox, which act like a computer within another computer. To overcome this limitation, we need to create a tunnel. To make things easier, I included another script which opens a tunnel on an arbitrary port to reach any service we deployed. You can type the following command:

bin/forward-kube-local.sh akkahttp-service 9000

Note that the tunnel does not run in the background; you have to keep the terminal window open as long as you need it, and close it when you no longer need the tunnel. While the tunnel is running, you can open http://localhost:9000/ip/8.8.8.8 and finally see the application running in Kubernetes.

Final Touch: Scale

So far we have “simply” put our application in Kubernetes. While it is an exciting achievement, it does not add too much value to our deployment. We’re saved from the effort of uploading and installing on a server and configuring a proxy server for it.

Where Kubernetes shines is in scaling. You can deploy two, ten, or one hundred instances of our application by only changing the number of replicas in the configuration file. So let’s do it.

We are going to stop the single pod and start a deployment instead. So let’s execute the following commands:

bin/kubectl delete -f kube/akkahttp-pod.yml
bin/kubectl create -f kube/akkahttp-deploy.yaml

Next, check the status. Again, you may need to try a couple of times, because the deployment can take some time to complete:

NAME                                   READY     STATUS    RESTARTS   AGE
akkahttp-deployment-4229989632-mjp6u   1/1       Running   0          16s
akkahttp-deployment-4229989632-s822x   1/1       Running   0          16s
k8s-etcd-127.0.0.1                     1/1       Running   0          6d
k8s-master-127.0.0.1                   4/4       Running   0          6d
k8s-proxy-127.0.0.1                    1/1       Running   0          6d

Now we have two pods, not one. This is because the configuration file I provided contains the value replicas: 2, which made the system generate two instances with different names. I am not going into the details of the configuration files, because the scope of this article is simply to give Scala programmers a jump-start into Kubernetes.

Anyhow, there are now two pods active. What is interesting is that the service is the same as before. We configured the service to load balance between all the pods labeled akkahttp. This means we do not have to redeploy the service; we can simply replace the single instance with a replicated one.

We can verify this by launching the proxy again (if you are on Windows and you have closed it):

bin/forward-kube-local.sh akkahttp-service 9000

Then, we can open two terminal windows and follow the logs of each pod. In the first, type:

bin/kubectl logs -f akkahttp-deployment-4229989632-mjp6u

And in the second:

bin/kubectl logs -f akkahttp-deployment-4229989632-s822x

Read the full article on Toptal.



Clustering Algorithms: From Start To State Of The Art

It’s not a bad time to be a Data Scientist. Serious people may find interest in you if you turn the conversation towards “Big Data”, and the rest of the party crowd will be intrigued when you mention “Artificial Intelligence” and “Machine Learning”. Even Google thinks you’re not bad, and that you’re getting even better. There are a lot of ‘smart’ algorithms that help data scientists do their wizardry. It may all seem complicated, but if we understand and organize algorithms a bit, it’s not even that hard to find and apply the one that we need.

Courses on data mining or machine learning will usually start with clustering, because it is both simple and useful. It is an important part of a somewhat wider area of Unsupervised Learning, where the data we want to describe is not labeled. In most cases, this is where the user did not give us much information about the expected output. The algorithm only has the data, and it should do the best it can. In our case, it should perform clustering – separating data into groups (clusters) that contain similar data points, while the dissimilarity between groups is as high as possible. Data points can represent anything, such as our clients. Clustering can be useful if we, for example, want to group similar users and then run different marketing campaigns on each cluster.

K-Means Clustering

After the necessary introduction, Data Mining courses always continue with K-Means: an effective, widely used, all-around clustering algorithm. Before actually running it, we have to define a distance function between data points (for example, Euclidean distance if we want to cluster points in space), and we have to set the number of clusters we want (k).

The algorithm begins by selecting k points as starting centroids (‘centers’ of clusters). We can just select any k random points, or we can use some other approach, but picking random points is a good start. Then, we iteratively repeat two steps:

  1. Assignment step: each of m points from our dataset is assigned to a cluster that is represented by the closest of the k centroids. For each point, we calculate distances to each centroid, and simply pick the least distant one.

  2. Update step: for each cluster, a new centroid is calculated as the mean of all points in the cluster. From the previous step, we have a set of points which are assigned to a cluster. Now, for each such set, we calculate a mean that we declare a new centroid of the cluster.

After each iteration, the centroids are slowly moving, and the total distance from each point to its assigned centroid gets lower and lower. The two steps are alternated until convergence, meaning until there are no more changes in cluster assignment. After a number of iterations, the same set of points will be assigned to each centroid, therefore leading to the same centroids again. K-Means is guaranteed to converge to a local optimum. However, that does not necessarily have to be the best overall solution (global optimum).
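To make the two alternating steps concrete, here is a minimal NumPy sketch of the loop just described. It is illustrative only: the function name is mine, and it does not handle the edge case of a cluster ending up empty, which a production implementation like Scikit-learn’s KMeans does.

import numpy as np

def k_means(points, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Start from k random points chosen as the initial centroids
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: label each point with its closest centroid
        distances = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = distances.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points
        new_centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break  # converged: the assignments no longer change
        centroids = new_centroids
    return labels, centroids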

The final clustering result can depend on the selection of initial centroids, so a lot of thought has been given to this problem. One simple solution is just to run K-Means a couple of times with random initial assignments. We can then select the best result by taking the one with the minimal sum of distances from each point to its cluster – the error value that we are trying to minimize in the first place.

Other approaches to selecting initial points rely on picking distant points. This can lead to better results, but it can be thrown off by outliers – those rare isolated points that are just “off”, and that may simply be errors. Since they are far from any meaningful cluster, each such point may end up being its own ‘cluster’. A good balance is the K-Means++ variant [Arthur and Vassilvitskii, 2007], whose initialization still picks random points, but with probability proportional to the squared distance from the previously chosen centroids. Points that are further away have a higher probability of being selected as starting centroids. Consequently, if there’s a group of points, the probability that one point from the group gets selected also gets higher as their probabilities add up, resolving the outlier problem we mentioned.
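The seeding rule itself takes only a few lines. Here is a sketch of it, again with NumPy and with an illustrative function name of my own:

import numpy as np

def k_means_pp_init(points, k, seed=0):
    rng = np.random.default_rng(seed)
    # The first centroid is picked uniformly at random
    centroids = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        # Squared distance from each point to its nearest chosen centroid
        d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centroids], axis=0)
        # Sample the next centroid with probability proportional to d2
        centroids.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centroids)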

K-Means++ is also the default initialization for Python’s Scikit-learn K-Means implementation. If you’re using Python, this may be your library of choice. For Java, the Weka library may be a good start:

Java (Weka)

// Load some data
Instances data = DataSource.read("data.arff");

// Create the model
SimpleKMeans kMeans = new SimpleKMeans();

// We want three clusters
kMeans.setNumClusters(3);

// Run K-Means
kMeans.buildClusterer(data);

// Print the centroids
Instances centroids = kMeans.getClusterCentroids();
for (Instance centroid: centroids) {
    System.out.println(centroid);
}

// Print cluster membership for each instance
for (Instance point: data) {
    System.out.println(point + " is in cluster " + kMeans.clusterInstance(point));
}

Python (Scikit-learn)

>>> from sklearn import cluster, datasets
>>> iris = datasets.load_iris()
>>> X_iris = iris.data
>>> y_iris = iris.target
>>> k_means = cluster.KMeans(n_clusters=3)
>>> k_means.fit(X_iris)
KMeans(copy_x=True, init='k-means++', ...
>>> print(k_means.labels_[::10])
[1 1 1 1 1 0 0 0 0 0 2 2 2 2 2]
>>> print(y_iris[::10])
[0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]

In the Python example above, we used a standard example dataset ‘Iris’, which contains flower petal and sepal dimensions for three different species of iris. We clustered these into three clusters, and compared the obtained clusters to the actual species (target), to see that they match perfectly.

In this case, we knew that there were three different clusters (species), and K-Means recognized correctly which ones go together. But how do we choose a good number of clusters k in general? These kinds of questions are quite common in Machine Learning. If we request more clusters, they will be smaller, and therefore the total error (the total of distances from points to their assigned clusters) will be smaller. So, is it a good idea just to set a bigger k? We may end up with k = m, that is, each point being its own centroid, with each cluster having only one point. Yes, the total error is 0, but we didn’t get a simpler description of our data, nor is it general enough to cover some new points that may appear. This is called overfitting, and we don’t want that.

A way to deal with this problem is to include some penalty for a larger number of clusters. So, we are now trying to minimize not only the error, but error + penalty. The error will just converge towards zero as we increase the number of clusters, but the penalty will grow. At some point, the gain from adding another cluster will be less than the introduced penalty, and we’ll have the optimal result. A solution that uses the Bayesian Information Criterion (BIC) for this purpose is called X-Means [Pelleg and Moore, 2000].
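X-Means itself is not shipped with Scikit-learn, but the error + penalty idea is easy to try out with Scikit-learn’s GaussianMixture, which exposes BIC directly. A sketch that fits one model per candidate k and keeps the one with the lowest score:

from sklearn import datasets
from sklearn.mixture import GaussianMixture

X = datasets.load_iris().data
# Fit one model per candidate k and score each with BIC
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 8)}
best_k = min(bics, key=bics.get)  # lowest BIC balances fit against complexity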

Another thing we have to define properly is the distance function. Sometimes that’s a straightforward task, a logical one given the nature of the data. For points in space, Euclidean distance is an obvious solution, but it may be tricky for features of different ‘units’, for discrete variables, etc. This may require a lot of domain knowledge. Or, we can call Machine Learning for help. We can actually try to learn the distance function. If we have a training set of points whose correct grouping we already know (i.e., points labeled with their clusters), we can use supervised learning techniques to find a good function, and then apply it to our target set that is not yet clustered.
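As a sketch of that last idea: Scikit-learn’s NeighborhoodComponentsAnalysis is one supervised technique that can learn such a transformation from labeled points, after which we can cluster unlabeled points in the learned space. The labeled/unlabeled split below is purely illustrative:

from sklearn import datasets
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NeighborhoodComponentsAnalysis

X, y = datasets.load_iris(return_X_y=True)
X_labeled, X_unlabeled, y_labeled, _ = train_test_split(X, y, random_state=0)

# Learn a distance-defining transformation from the labeled points...
nca = NeighborhoodComponentsAnalysis(random_state=0).fit(X_labeled, y_labeled)
# ...then run plain K-Means in the transformed space
labels = KMeans(n_clusters=3, n_init=10).fit_predict(nca.transform(X_unlabeled))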

The method used in K-Means, with its two alternating steps, resembles the Expectation–Maximization (EM) method. Actually, it can be considered a very simple version of EM. However, it should not be confused with the more elaborate EM clustering algorithm, even though it shares some of the same principles.

EM Clustering

So, with K-Means clustering each point is assigned to just a single cluster, and a cluster is described only by its centroid. This is not too flexible, as we may have problems with clusters that are overlapping, or ones that are not of circular shape. With EM Clustering, we can now go a step further and describe each cluster by its centroid (mean), covariance (so that we can have elliptical clusters), and weight (the size of the cluster). The probability that a point belongs to a cluster is now given by a multivariate Gaussian probability distribution (multivariate - depending on multiple variables). That also means that we can calculate the probability of a point being under a Gaussian ‘bell’, i.e. the probability of a point belonging to a cluster.

We now start the EM procedure by calculating, for each point, the probabilities of it belonging to each of the current clusters (which, again, may be randomly created at the beginning). This is the E-step. If one cluster is a really good candidate for a point, it will have a probability close to one. However, two or more clusters can be acceptable candidates, so the point has a distribution of probabilities over clusters. This property of the algorithm – points not being restricted to one cluster – is called “soft clustering”.

The M-step then recalculates the parameters of each cluster, using the assignments of points to the previous set of clusters. To calculate the new mean, covariance, and weight of a cluster, each data point is weighted by its probability of belonging to the cluster, as calculated in the previous step.

Alternating these two steps will increase the total log-likelihood until it converges. Again, the maximum may be local, so we can run the algorithm several times to get better clusters.

If we now want to determine a single cluster for each point, we may simply choose the most probable one. Having a probability model, we can also use it to generate similar data, that is to sample more points that are similar to the data that we observed.
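In Scikit-learn, this style of clustering is available as GaussianMixture. A short sketch of the soft assignments and the generative use just described:

from sklearn import datasets
from sklearn.mixture import GaussianMixture

X = datasets.load_iris().data
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

probs = gmm.predict_proba(X[:3])  # soft clustering: one probability per cluster
labels = gmm.predict(X[:3])       # hard assignment: the most probable cluster
samples, _ = gmm.sample(5)        # generate new points similar to the data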

Affinity Propagation

Affinity Propagation (AP) was published by Frey and Dueck in 2007, and is only getting more and more popular due to its simplicity, general applicability, and performance. It is changing its status from state of the art to de facto standard.

The main drawbacks of K-Means and similar algorithms are having to select the number of clusters, and choosing the initial set of points. Affinity Propagation, instead, takes as input measures of similarity between pairs of data points, and simultaneously considers all data points as potential exemplars. Real-valued messages are exchanged between data points until a high-quality set of exemplars and corresponding clusters gradually emerges.

As an input, the algorithm requires us to provide two sets of data:

  1. Similarities between data points, representing how well-suited a point is to be another one’s exemplar. If two points cannot belong to the same cluster at all, this similarity can be omitted or set to -Infinity, depending on the implementation.

  2. Preferences, representing each data point’s suitability to be an exemplar. We may have some a priori information which points could be favored for this role, and so we can represent it through preferences.

Both similarities and preferences are often represented through a single matrix, where the values on the main diagonal represent preferences. Matrix representation is good for dense datasets. Where connections between points are sparse, it is more practical not to store the whole n x n matrix in memory, but instead to keep a list of similarities to connected points. Behind the scenes, ‘exchanging messages between points’ is the same thing as manipulating matrices; it’s only a matter of perspective and implementation.
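For illustration, here is one way such a matrix could be built for points in space, using negative squared Euclidean distance as the similarity and the median similarity as a default preference (the function name and defaults are my own):

import numpy as np

def similarity_matrix(points, preference=None):
    diff = points[:, None] - points[None, :]
    S = -np.sum(diff ** 2, axis=2)   # s(i, k) = -||x_i - x_k||^2
    if preference is None:
        preference = np.median(S)    # a common default choice
    np.fill_diagonal(S, preference)  # the main diagonal holds the preferences
    return S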

The algorithm then runs through a number of iterations, until it converges. Each iteration has two message-passing steps:

  1. Calculating responsibilities: Responsibility r(i, k) reflects the accumulated evidence for how well-suited point k is to serve as the exemplar for point i, taking into account other potential exemplars for point i. Responsibility is sent from data point i to candidate exemplar point k.

  2. Calculating availabilities: Availability a(i, k) reflects the accumulated evidence for how appropriate it would be for point i to choose point k as its exemplar, taking into account the support from other points that point k should be an exemplar. Availability is sent from candidate exemplar point k to point i.

In order to calculate responsibilities, the algorithm uses the original similarities and the availabilities calculated in the previous iteration (initially, all availabilities are set to zero). Responsibilities are set to the input similarity between point i and point k as its exemplar, minus the largest of the similarity-plus-availability sums between point i and the other candidate exemplars. The logic behind calculating how suitable a point is for an exemplar is that it is favored more if the initial a priori preference was higher. However, the responsibility gets lower when there is a similar point that considers itself a good candidate, so there is a ‘competition’ between the two until one is decided in some iteration.

Calculating availabilities, then, uses calculated responsibilities as evidence whether each candidate would make a good exemplar. Availability a(i, k) is set to the self-responsibility r(k, k) plus the sum of the positive responsibilities that candidate exemplar k receives from other points.

Finally, we can have different stopping criteria to terminate the procedure, such as when changes in values fall below some threshold, or when the maximum number of iterations is reached. At any point throughout the Affinity Propagation procedure, summing the Responsibility (r) and Availability (a) matrices gives us the clustering information we need: for point i, the k with maximum r(i, k) + a(i, k) represents point i’s exemplar. Or, if we just need the set of exemplars, we can scan the main diagonal. If r(i, i) + a(i, i) > 0, point i is an exemplar.

We’ve seen that with K-Means and similar algorithms, deciding the number of clusters can be tricky. With AP, we don’t have to explicitly specify it, but it may still need some tuning if we obtain either more or less clusters than we may find optimal. Luckily, just by adjusting the preferences we can lower or raise the number of clusters. Setting preferences to a higher value will lead to more clusters, as each point is ‘more certain’ of its suitability to be an exemplar and is therefore harder to ‘beat’ and include it under some other point’s ‘domination’. Conversely, setting lower preferences will result in having less clusters; as if they’re saying “no, no, please, you’re a better exemplar, I’ll join your cluster”. As a general rule, we may set all preferences to the median similarity for a medium to large number of clusters, or to the lowest similarity for a moderate number of clusters. However, a couple of runs with adjusting preferences may be needed to get the result that exactly suits our needs.

Hierarchical Affinity Propagation is also worth mentioning, as a variant of the algorithm that deals with quadratic complexity by splitting the dataset into a couple of subsets, clustering them separately, and then performing the second level of clustering.

In The End…

There’s a whole range of clustering algorithms, each one with its pros and cons regarding what type of data they with, time complexity, weaknesses, and so on. To mention more algorithms, for example there’s Hierarchical Agglomerative Clustering (or Linkage Clustering), good for when we don’t necessarily have circular (or hyper-spherical) clusters, and don’t know the number of clusters in advance. It starts with each point being a separate cluster, and works by joining two closest clusters in each step until everything is in one big cluster.

With Hierarchical Agglomerative Clustering, we can easily decide the number of clusters afterwards, by cutting the dendrogram (tree diagram) horizontally wherever we find suitable. It is also repeatable (it always gives the same answer for the same dataset), but it is also of a higher (quadratic) complexity.

Then, DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is also an algorithm worth mentioning. It groups points that are closely packed together, expanding clusters in any direction where there are nearby points, thus dealing with different shapes of clusters.
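Both are equally quick to try out in Scikit-learn; the eps value below is an arbitrary illustration and would need tuning for real data:

from sklearn import datasets
from sklearn.cluster import DBSCAN, AgglomerativeClustering

X = datasets.load_iris().data
# Agglomerative: choose the number of clusters when cutting the tree
agg_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
# DBSCAN: no cluster count needed; eps and min_samples define density
db_labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)  # -1 marks noise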

These algorithms deserve an article of their own, and we can explore them in a separate post later on.

It takes experience with some trial and error to know when to use one algorithm or the other. Luckily, we have a range of implementations in different programming languages, so trying them out only requires a little willingness to play.

 This article was written by Lovro Iliassich, a Toptal Java developer. 



Python Class Attributes

I had a programming interview recently, a phone-screen in which we used a collaborative text editor.

I was asked to implement a certain API, and chose to do so in Python. Abstracting away the problem statement, let’s say I needed a class whose instances stored some data and some other_data.

I took a deep breath and started typing. After a few lines, I had something like this:

class Service(object):
    data = []

    def __init__(self, other_data):
        self.other_data = other_data

    ...

My interviewer stopped me:

  • Interviewer: “That line: data = []. I don’t think that’s valid Python?”
  • Me: “I’m pretty sure it is. It’s just setting a default value for the instance attribute.”
  • Interviewer: “When does that code get executed?”
  • Me: “I’m not really sure. I’ll just fix it up to avoid confusion.”

For reference, and to give you an idea of what I was going for, here’s how I amended the code:

class Service(object):
    def __init__(self, other_data):
        self.data = []
        self.other_data = other_data

    ...

As it turns out, we were both wrong. The real answer lay in understanding the distinction between class and instance attributes.

Python class attributes vs. Python instance attributes

Note: if you have an expert handle on class attributes, you can skip ahead to use cases.

Class Attributes

My interviewer was wrong in that the above code is syntactically valid.

I too was wrong in that it isn’t setting a “default value” for the instance attribute. Instead, it’s defining data as a class attribute with value [].

In my experience, class attributes are a topic that many people know something about, but few understand completely.

What’s the difference?

A class attribute is an attribute of the class (circular, I know), rather than an attribute of an instance of a class.

Let’s use an example to illustrate the difference. Here, class_var is a class attribute, and i_var is an instance attribute:

class MyClass(object):
    class_var = 1

    def __init__(self, i_var):
        self.i_var = i_var

Note that all instances of the class have access to class_var, and that it can also be accessed as a property of the class itself:

foo = MyClass(2)
bar = MyClass(3)
foo.class_var, foo.i_var
## 1, 2
bar.class_var, bar.i_var
## 1, 3
MyClass.class_var ## <— This is key
## 1

For Java or C++ programmers, the class attribute is similar—but not identical—to the static member. We’ll see how they differ below.

Class vs. instance namespaces

To understand what’s happening here, let’s talk briefly about Python namespaces.

A namespace is a mapping from names to objects, with the property that there is zero relation between names in different namespaces. They’re usually implemented as Python dictionaries, although this is abstracted away.

Depending on the context, you may need to access a namespace using dot syntax (e.g., object.name_from_objects_namespace) or as a local variable (e.g., object_from_namespace). As a concrete example:

class MyClass(object):
    ## No need for dot syntax
    class_var = 1

    def __init__(self, i_var):
        self.i_var = i_var

## Need dot syntax as we've left scope of class namespace
MyClass.class_var
## 1

Python classes and instances of classes each have their own distinct namespaces represented by pre-defined attributes MyClass.__dict__ and instance_of_MyClass.__dict__, respectively.

When you try to access an attribute from an instance of a class, it first looks at its instance namespace. If it finds the attribute, it returns the associated value. If not, it then looks in the class namespace and returns the attribute (if it’s present, throwing an error otherwise). For example:

foo = MyClass(2)

## Finds i_var in foo's instance namespace
foo.i_var
## 2

## Doesn't find class_var in instance namespace…
## So it looks in the class namespace (MyClass.__dict__)
foo.class_var
## 1

The instance namespace takes supremacy over the class namespace: if there is an attribute with the same name in both, the instance namespace will be checked first and its value returned. Here’s a simplified version of the code (source) for attribute lookup:

def instlookup(inst, name):
    ## simplified algorithm...
    if name in inst.__dict__:
        return inst.__dict__[name]
    else:
        return inst.__class__.__dict__[name]

And, in visual form:

attribute lookup in visual form

Handling assignment

With this in mind, we can make sense of how class attributes handle assignment:

  • If a class attribute is set by accessing the class, it will override the value for all instances. For example:

    foo = MyClass(2)
    foo.class_var
    ## 1
    MyClass.class_var = 2
    foo.class_var
    ## 2

    At the namespace level… we’re setting MyClass.__dict__['class_var'] = 2. (Note: this isn’t the exact code (which would be setattr(MyClass, 'class_var', 2)), as __dict__ returns a dictproxy, an immutable wrapper that prevents direct assignment, but it helps for demonstration’s sake.) Then, when we access foo.class_var, class_var has a new value in the class namespace, and thus 2 is returned.

  • If a class variable is set by accessing an instance, it will override the value only for that instance. This essentially overrides the class variable and turns it into an instance variable available, intuitively, only for that instance. For example:

    foo = MyClass(2)
    foo.class_var
    ## 1
    foo.class_var = 2
    foo.class_var
    ## 2
    MyClass.class_var
    ## 1

    At the namespace level… we’re adding the class_var attribute to foo.__dict__, so when we look up foo.class_var, we return 2. Meanwhile, other instances of MyClass will not have class_var in their instance namespaces, so they continue to find class_var in MyClass.__dict__ and thus return 1.

Mutability

Quiz question: What if your class attribute has a mutable type? You can manipulate (mutilate?) the class attribute by accessing it through a particular instance and, in turn, end up manipulating the referenced object that all instances are accessing (as pointed out by Timothy Wiseman).

This is best demonstrated by example. Let’s go back to the Service I defined earlier and see how my use of a class variable could have led to problems down the road.

class Service(object):
    data = []

    def __init__(self, other_data):
        self.other_data = other_data

    ...

My goal was to have the empty list ([]) as the default value for data, and for each instance of Service to have its own data that would be altered over time on an instance-by-instance basis. But in this case, we get the following behavior (recall that Service takes some argument other_data, which is arbitrary in this example):

s1 = Service(['a', 'b'])
s2 = Service(['c', 'd'])
s1.data.append(1)
s1.data
## [1]
s2.data
## [1]
s2.data.append(2)
s1.data
## [1, 2]
s2.data
## [1, 2]

This is no good—altering the class variable via one instance alters it for all the others!

At the namespace level… all instances of Service are accessing and modifying the same list in Service.__dict__ without making their own data attributes in their instance namespaces.

We could get around this using assignment; that is, instead of exploiting the list’s mutability, we could assign our Service objects to have their own lists, as follows:

s1 = Service(['a', 'b'])
s2 = Service(['c', 'd'])
s1.data = [1]
s2.data = [2]
s1.data
## [1]
s2.data
## [2]

In this case, we’re adding s1.__dict__['data'] = [1], so the original Service.__dict__['data'] remains unchanged.

Unfortunately, this requires that Service users have intimate knowledge of its variables, and is certainly prone to mistakes. In a sense, we’d be addressing the symptoms rather than the cause. We’d prefer something that was correct by construction.

My personal solution: if you’re just using a class variable to assign a default value to a would-be instance variable, don’t use mutable values. In this case, every instance of Service was going to override Service.data with its own instance attribute eventually, so using an empty list as the default led to a tiny bug that was easily overlooked. Instead of the above, we could’ve either:

  1. Stuck to instance attributes entirely, as demonstrated in the introduction.
  2. Avoided using the empty list (a mutable value) as our “default”:

    class Service(object):
        data = None

        def __init__(self, other_data):
            self.other_data = other_data

        ...
    	
    	

    Of course, we’d have to handle the None case appropriately, but that’s a small price to pay.
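    For instance, one way (a sketch, not the only option) is to handle the None case lazily, the first time the data is actually needed:

    class Service(object):
        data = None

        def __init__(self, other_data):
            self.other_data = other_data

        def add(self, value):
            ## Give this instance its own list the first time it's needed
            if self.data is None:
                self.data = []
            self.data.append(value)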


So when would you use them?

Class attributes are tricky, but let’s look at a few cases when they would come in handy:

  1. Storing constants. As class attributes can be accessed as attributes of the class itself, it’s often nice to use them for storing Class-wide, Class-specific constants. For example:

    class Circle(object):
        pi = 3.14159

        def __init__(self, radius):
            self.radius = radius

        def area(self):
            return Circle.pi * self.radius * self.radius

    Circle.pi
    ## 3.14159
    c = Circle(10)
    c.pi
    ## 3.14159
    c.area()
    ## 314.159
    	
  2. Defining default values. As a trivial example, we might create a bounded list (i.e., a list that can only hold a certain number of elements or fewer) and choose to have a default cap of 10 items:

    class MyClass(object):
        limit = 10

        def __init__(self):
            self.data = []

        def item(self, i):
            return self.data[i]

        def add(self, e):
            if len(self.data) >= self.limit:
                raise Exception("Too many elements")
            self.data.append(e)

    MyClass.limit
    ## 10
    	

    We could then create instances with their own specific limits, too, by assigning to the instance’s limit attribute.

    foo = MyClass()
    foo.limit = 50
    ## foo can now hold 50 elements—other instances can hold 10
    	

    This only makes sense if you will want your typical instance of MyClass to hold just 10 elements or fewer—if you’re giving all of your instances different limits, then limit should be an instance variable. (Remember, though: take care when using mutable values as your defaults.)

  3. Tracking all data across all instances of a given class. This is sort of specific, but I could see a scenario in which you might want to access a piece of data related to every existing instance of a given class.

    To make the scenario more concrete, let’s say we have a Person class, and every person has a name. We want to keep track of all the names that have been used. One approach might be to iterate over the garbage collector’s list of objects, but it’s simpler to use class variables.

    Note that, in this case, names will only be accessed as a class variable, so the mutable default is acceptable.
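    A minimal sketch of that approach (the names here are illustrative):

    class Person(object):
        all_names = []  ## only ever mutated, never reassigned, so sharing is intended

        def __init__(self, name):
            self.name = name
            Person.all_names.append(name)

    joe = Person('Joe')
    bob = Person('Bob')
    Person.all_names
    ## ['Joe', 'Bob']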



Declarative Programming: Is It A Real Thing?

Declarative programming is, currently, the dominant paradigm of an extensive and diverse set of domains such as databases, templating and configuration management.

In a nutshell, declarative programming consists of instructing a program on what needs to be done, instead of telling it how to do it. In practice, this approach entails providing a domain-specific language (DSL) for expressing what the user wants, and shielding them from the low-level constructs (loops, conditionals, assignments) that materialize the desired end state.

While this paradigm is a remarkable improvement over the imperative approach that it replaced, I contend that declarative programming has significant limitations, limitations that I explore in this article. Moreover, I propose a dual approach that captures the benefits of declarative programming while superseding its limitations.

Read the full article on Toptal



Clean Code and The Art of Exception Handling

Exceptions are as old as programming itself. Back in the days when programming was done in hardware, or via low-level programming languages, exceptions were used to alter the flow of the program, and to avoid hardware failures. Today, Wikipedia defines exceptions as:

anomalous or exceptional conditions requiring special processing – often changing the normal flow of program execution…

And that handling them requires:

specialized programming language constructs or computer hardware mechanisms.

So, exceptions require special treatment, and an unhandled exception may cause unexpected behavior. The results are often spectacular. In 1996, the famous Ariane 5 rocket launch failure was attributed to an unhandled overflow exception. History’s Worst Software Bugs contains some other bugs that could be attributed to unhandled or mishandled exceptions.

Over time, these errors, and countless others (that were, perhaps, not as dramatic, but still catastrophic for those involved) contributed to the impression that exceptions are bad.

The results of improperly handling exceptions have led us to believe that exceptions are always bad.

But exceptions are a fundamental element of modern programming; they exist to make our software better. Rather than fearing exceptions, we should embrace them and learn how to benefit from them. In this article, we will discuss how to manage exceptions elegantly, and use them to write clean code that is more maintainable.

Exception Handling: It’s a Good Thing

With the rise of object-oriented programming (OOP), exception support has become a crucial element of modern programming languages. A robust exception handling system is built into most languages, nowadays. For example, Ruby provides for the following typical pattern:

begin
  do_something_that_might_not_work!
rescue SpecificError => e
  do_some_specific_error_clean_up
  retry if some_condition_met?
ensure
  this_will_always_be_executed
end

There is nothing wrong with the previous code. But overusing these patterns will cause code smells, and won’t necessarily be beneficial. Likewise, misusing them can actually do a lot of harm to your code base, making it brittle, or obfuscating the cause of errors.

The stigma surrounding exceptions often makes programmers feel at a loss. It’s a fact of life that exceptions can’t be avoided, but we are often taught they must be dealt with swiftly and decisively. As we will see, this is not necessarily true. Rather, we should learn the art of handling exceptions gracefully, making them harmonious with the rest of our code.

Following are some recommended practices that will help you embrace exceptions and make use of them and their abilities to keep your code maintainable, extensible, and readable:

  • maintainability: Allows us to easily find and fix new bugs, without the fear of breaking current functionality, introducing further bugs, or having to abandon the code altogether due to increased complexity over time.
  • extensibility: Allows us to easily add to our code base, implementing new or changed requirements without breaking existing functionality. Extensibility provides flexibility, and enables a high level of reusability for our code base.
  • readability: Allows us to easily read the code and discover its purpose without spending too much time digging. This is critical for efficiently discovering bugs and untested code.

These elements are the main factors of what we might call cleanliness or quality, which is not a direct measure itself, but instead is the combined effect of the previous points, as demonstrated in this comic:

"WTFs/m" by Thom Holwerda, OSNews

With that said, let’s dive into these practices and see how each of them affects those three measures.

Note: We will present examples from Ruby, but all of the constructs demonstrated here have equivalents in the most common OOP languages.

Always create your own ApplicationError hierarchy

Most languages come with a variety of exception classes, organized in an inheritance hierarchy, like any other OOP class. To preserve the readability, maintainability, and extensibility of our code, it’s a good idea to create our own subtree of application-specific exceptions that extend the base exception class. Investing some time in logically structuring this hierarchy can be extremely beneficial. For example:

class ApplicationError < StandardError; end
# Validation Errors
class ValidationError < ApplicationError; end
class RequiredFieldError < ValidationError; end
class UniqueFieldError < ValidationError; end
# HTTP 4XX Response Errors
class ResponseError < ApplicationError; end
class BadRequestError < ResponseError; end
class UnauthorizedError < ResponseError; end
# ...

Example of an application exception hierarchy.

Having an extensible, comprehensive exceptions package for our application makes handling these application-specific situations much easier. For example, we can decide which exceptions to handle in a more natural way. This not only boosts the readability of our code, but also increases the maintainability of our applications and libraries (gems).

From the readability perspective, it’s much easier to read:

rescue ValidationError => e

Than to read:

rescue RequiredFieldError, UniqueFieldError, ... => e

From the maintainability perspective, say, for example, we are implementing a JSON API, and we have defined our own ClientError with several subtypes, to be used when a client sends a bad request. If any one of these is raised, the application should render the JSON representation of the error in its response. It will be easier to fix, or add logic, to a single block that handles ClientErrors rather than looping over each possible client error and implementing the same handler code for each. In terms of extensibility, if we later have to implement another type of client error, we can trust it will already be handled properly here.

Moreover, this does not prevent us from implementing additional special handling for specific client errors earlier in the call stack, or altering the same exception object along the way:

# app/controller/pseudo_controller.rb
def authenticate_user!
  fail AuthenticationError if token_invalid? || token_expired?
  User.find_by(authentication_token: token)
rescue AuthenticationError => e
  report_suspicious_activity if token_invalid?
  raise e
end

def show
  authenticate_user!
  show_private_stuff!(params[:id])
rescue ClientError => e
  render_error(e)
end

As you can see, raising this specific exception didn’t prevent us from being able to handle it on different levels, altering it, re-raising it, and allowing the parent class handler to resolve it.

Two things to note here:

  • Not all languages support raising exceptions from within an exception handler.
  • In most languages, raising a new exception from within a handler will cause the original exception to be lost forever, so it’s better to re-raise the same exception object (as in the above example) to avoid losing track of the original cause of the error. (Unless you are doing this intentionally).

Never rescue Exception

That is, never try to implement a catch-all handler for the base exception type. Rescuing or catching all exceptions wholesale is never a good idea in any language, whether it’s globally on a base application level, or in a small buried method used only once. We don’t want to rescue Exception because it will obfuscate whatever really happened, damaging both maintainability and extensibility. We can waste a huge amount of time debugging what the actual problem is, when it could be as simple as a syntax error:

# main.rb
def bad_example
  i_might_raise_exception!
rescue Exception
  nah_i_will_always_be_here_for_you
end

# elsewhere.rb
def i_might_raise_exception!
  retrun do_a_lot_of_work!
end

You might have noticed the error in the previous example; return is mistyped. Although modern editors provide some protection against this specific type of syntax error, this example illustrates how rescue Exception does harm to our code. At no point is the actual type of the exception (in this case a NoMethodError) addressed, nor is it ever exposed to the developer, which may cause us to waste a lot of time running in circles.

Never rescue more exceptions than you need to

The previous point is a specific case of this rule: We should always be careful not to over-generalize our exception handlers. The reasons are the same; whenever we rescue more exceptions than we should, we end up hiding parts of the application logic from the higher levels of the application, not to mention suppressing the developer’s ability to handle the exception themselves. This severely affects the extensibility and maintainability of the code.

If we do attempt to handle different exception subtypes in the same handler, we introduce fat code blocks that have too many responsibilities. For example, if we are building a library that consumes a remote API, handling a MethodNotAllowedError (HTTP 405), is usually different from handling an UnauthorizedError (HTTP 401), even though they are both ResponseErrors.

As we will see, often there exists a different part of the application that would be better suited to handle specific exceptions in a more DRY way.

So, define the single responsibility of your class or method, and handle the bare minimum of exceptions that satisfy this responsibility requirement. For example, if a method is responsible for getting stock info from a remote API, then it should handle only the exceptions that arise from getting that info, and leave the handling of other errors to a different method designed specifically for those responsibilities:

def get_info
  begin
    response = HTTP.get(STOCKS_URL + "#{@symbol}/info")

    fail AuthenticationError if response.code == 401
    fail StockNotFoundError, @symbol if response.code == 404

    return JSON.parse response.body
  rescue JSON::ParserError
    retry
  end
end

Here we defined the contract for this method to only get us the info about the stock. It handles endpoint-specific errors, such as an incomplete or malformed JSON response. It doesn’t handle the case when authentication fails or expires, or if the stock doesn’t exist. These are someone else’s responsibility, and are explicitly passed up the call stack where there should be a better place to handle these errors in a DRY way.

Resist the urge to handle exceptions immediately

This is the complement to the last point. An exception can be handled at any point in the call stack, and any point in the class hierarchy, so knowing exactly where to handle it can be mystifying. To solve this conundrum, many developers opt to handle any exception as soon as it arises, but investing time in thinking this through will usually result in finding a more appropriate place to handle specific exceptions.

One common pattern that we see in Rails applications (especially those that expose JSON-only APIs) is the following controller method:

# app/controllers/client_controller.rb
def create
  @client = Client.new(params[:client])

  if @client.save
    render json: @client
  else
    render json: @client.errors
  end
end

(Note that although this is not technically an exception handler, functionally, it serves the same purpose, since @client.save only returns false when it encounters an exception.)

In this case, however, repeating the same error handler in every controller action is the opposite of DRY, and damages maintainability and extensibility. Instead, we can make use of the special nature of exception propagation, and handle them only once, in the parent controller class, ApplicationController:

# app/controllers/client_controller.rb
def create
  @client = Client.create!(params[:client])
  render json: @client
end

# app/controller/application_controller.rb
rescue_from ActiveRecord::RecordInvalid, with: :render_unprocessable_entity

def render_unprocessable_entity(e)
  render \
    json: { errors: e.record.errors },
    status: 422
end

This way, we can ensure that all of the ActiveRecord::RecordInvalid errors are properly and DRY-ly handled in one place, on the base ApplicationController level. This gives us the freedom to fiddle with them if we want to handle specific cases at the lower level, or simply let them propagate gracefully.

Not all exceptions need handling

When developing a gem or a library, many developers will try to encapsulate the functionality and not allow any exception to propagate out of the library. But sometimes, it’s not obvious how to handle an exception until the specific application is implemented.

Let’s take ActiveRecord as an example of the ideal solution. The library provides developers with two approaches for completeness. The save method handles exceptions without propagating them, simply returning false, while save! raises an exception when it fails. This gives developers the option of handling specific error cases differently, or simply handling any failure in a general way.

But what if you don’t have the time or resources to provide such a complete implementation? In that case, if there is any uncertainty, it is best to expose the exception, and release it into the wild.

Sometimes the best way to handle an exception is to let it fly free.

Here’s why: We are working with moving requirements almost all the time, and making the decision that an exception will always be handled in a specific way might actually harm our implementation, damaging extensibility and maintainability, and potentially adding huge technical debt, especially when developing libraries.

Take the earlier example of a stock API consumer fetching stock prices. We chose to handle the incomplete and malformed response on the spot, and we chose to retry the same request again until we got a valid response. But later, the requirements might change, such that we must fall back to saved historical stock data, instead of retrying the request.

At this point, we will be forced to change the library itself, updating how this exception is handled, because the dependent projects won’t handle this exception. (How could they? It was never exposed to them before.) We will also have to inform the owners of projects that rely on our library. This might become a nightmare if there are many such projects, since they are likely to have been built on the assumption that this error will be handled in a specific way.

Now, we can see where we are heading with dependencies management. The outlook is not good. This situation happens quite often, and more often than not, it degrades the library’s usefulness, extensibility, and flexibility.

So here is the bottom line: if it is unclear how an exception should be handled, let it propagate gracefully. There are many cases where a clear place exists to handle the exception internally, but there are many other cases where exposing the exception is better. So before you opt into handling the exception, just give it a second thought. A good rule of thumb is to only insist on handling exceptions when you are interacting directly with the end-user.

Follow the convention

The implementation of Ruby, and, even more so, Rails, follows some naming conventions, such as distinguishing between method_names and method_names! with a “bang.” In Ruby, the bang indicates that the method will alter the object that invoked it, and in Rails, it means that the method will raise an exception if it fails to execute the expected behavior. Try to respect the same convention, especially if you are going to open-source your library.

If we were to write a new method! with a bang in a Rails application, we must take these conventions into account. There is nothing forcing us to raise an exception when this method fails, but by deviating from the convention, this method may mislead programmers into believing they will be given the chance to handle exceptions themselves, when, in fact, they will not.

Another Ruby convention, attributed to Jim Weirich, is to use fail to indicate method failure, and only to use raise if you are re-raising the exception.

“An aside, because I use exceptions to indicate failures, I almost always use the “fail” keyword rather than the “raise” keyword in Ruby. Fail and raise are synonyms so there is no difference except that “fail” more clearly communicates that the method has failed. The only time I use “raise” is when I am catching an exception and re-raising it, because here I’m not failing, but explicitly and purposefully raising an exception. This is a stylistic issue I follow, but I doubt many other people do.”

Many other language communities have adopted conventions like these around how exceptions are treated, and ignoring these conventions will damage the readability and maintainability of our code.

Logger.log(everything)

This practice doesn’t solely apply to exceptions, of course, but if there’s one thing that should always be logged, it’s an exception.

Logging is extremely important (important enough for Ruby to ship a logger in its standard library). It’s the diary of our applications, and even more important than keeping a record of how our applications succeed is logging how and when they fail.

There is no shortage of logging libraries or log-based services and design patterns. It’s critical to keep track of our exceptions so we can review what happened and investigate if something doesn’t look right. Proper log messages can point developers directly to the cause of a problem, saving them immeasurable time.

That Clean Code Confidence

Proper exception handling allows for clean code and successful software.

Clean exception handling will send your code quality to the moon!
 

Exceptions are a fundamental part of every programming language. They are special and extremely powerful, and we must leverage their power to elevate the quality of our code instead of exhausting ourselves fighting with them.

In this article, we dived into some good practices for structuring our exception trees and how it can be beneficial for readability and quality to logically structure them. We looked at different approaches for handling exceptions, either in one place or on multiple levels.

We saw that it’s bad to “catch ‘em all”, and that it’s ok to let them float around and bubble up.

We looked at where to handle exceptions in a DRY manner, and learned that we are not obligated to handle them when or where they first arise.

We discussed when exactly it is a good idea to handle them, when it’s a bad idea, and why, when in doubt, it’s a good idea to let them propagate.

Finally, we discussed other points that can help maximize the usefulness of exceptions, such as following conventions and logging everything.

With these basic guidelines, we can feel much more comfortable and confident dealing with error cases in our code, and making our exceptions truly exceptional!

Special thanks to Avdi Grimm and his awesome talk, Exceptional Ruby, which helped a lot in the making of this article.

This article was written by AHMED ABDELRAZZAK, a Toptal SQL developer.



Introduction To Concurrent Programming: A Beginner's Guide

What is concurrent programming? Simply described, it’s when you are doing more than one thing at the same time. Not to be confused with parallelism, concurrency is when multiple sequences of operations are run in overlapping periods of time. In the realm of programming, concurrency is a pretty complex subject. Dealing with constructs such as threads and locks and avoiding issues like race conditions and deadlocks can be quite cumbersome, making concurrent programs difficult to write. Through concurrency, programs can be designed as independent processes working together in a specific composition. Such a structure may or may not be made parallel; however, achieving such a structure in your program offers numerous advantages.
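
As a minimal illustration of that idea (a sketch in Python, not taken from the full article), here are two threads whose steps interleave in time rather than running one after the other:

import threading
import time

def task(name):
    # time.sleep simulates waiting on I/O; while one thread waits,
    # the other makes progress, so the two sequences overlap in time.
    for step in range(3):
        time.sleep(0.1)
        print("%s: step %d" % (name, step))

threads = [threading.Thread(target=task, args=("task-%d" % i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for both interleaved sequences to finish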

Read the full article on Toptal.



The 5 Most Common UI Design Mistakes

Although the title UI Designer suggests a sort of departure from the traditional graphic designer, UI design is still a part of the historical trajectory of the visual design discipline.

With each movement or medium, the discipline has introduced new graphic languages, layouts, and design processes. Between generations, designers have straddled the transitions from press to xerox, and from paper to pixel. Across these generations, graphic design has carried the responsibility of representing the visual language of each era.

Therefore, as UI design makes the transition out of its infancy, what sort of graphic world can we expect to develop? Unfortunately, based on the current trajectory, the future may look bleak. Much of UI design today has become standardized and repeatable. Design discussions online revolve around learning the rules to make designs safely work, rather than pushing the envelope or imagining new things. The tendency of UI designers to resort to patterns and trends has not only created a bland visual environment, but has also diminished the value of the designer as processes become more and more formulaic. The issue is not one of technicalities, but of impending visual boredom.

Thus, the top five common UI design mistakes are:

  • Following Design Rules
  • Abusing the Grid
  • Patterns and the Standardization of UI Design
  • Misunderstanding Typefaces
  • Finding Safety in Contrast

UI Design Rule Book

Understand principles and be creative within their properties. Following the rules will only take you where others have already been.

Common Mistake #1: UI Designers Follow the Rules

The world of graphic design has always followed sets of rules and standards. Quite often, in any design discipline, the common mistakes closely coincide with a standard rule that has been broken. From this perspective, the design rules seem pretty trustworthy to follow.

However, in just about any design discipline, new movements and creative innovation have generally resulted from consciously breaking the rule book. This is possible because design is conditional: it requires the discretion of the designer, not a process with finite answers. Design rules should therefore be treated as guidelines rather than hard and fast laws. The experienced designer knows and respects the rule book just enough to be able to break it.

Unfortunately, the way design is often discussed online is within sets of do’s and don’ts: top mistakes and practices for design in 10 easy steps! Design isn’t so straightforward; it requires a robust understanding of principles and tendencies, not checklists to carry out systematically.

The concern is that if designers were to stop ‘breaking the rules’, nothing creatively new would ever be made. If UI designers only develop their ability to follow guidelines, rather than make their own decisions, they may quickly become irrelevant. How else will we argue that a designer is worth more than an off-the-shelf template?

Be Wary of Top Ten Design Rules

The issue with design rules in today’s UI design community is that they are so abundant. In the interest of solving any problem, a designer can look to the existing UI community and its stock of solutions, rather than work the issue out on their own. However, the sheer abundance of these guides and rules has made them less credible.

A Google search for “Top UI Design Mistakes” yields half a million results. So what are the chances that most, if any, of these authors agree with one another? Or that any given design tip will accurately match the design problems of the reader?

Often, educational articles online discuss acute problems rather than the guiding design principles behind them. The result is that new designers never learn why design works the way it does; they only become able to copy what has come before. Isn’t it concerning that none of these sorts of articles encourages something like play?

The designer should have a toolkit of principles to guide them, rather than a book of rules prescribing predetermined designs. Press x for parallax scrolling and y for carousels. Before choosing, refer to the most recent blog post on which navigational tool is trending. Boring!

Trends are like junk food for designers. Following trends produces cheap designs that may offer some initial payback, but little worth in the long run. Not only may trendy designs quickly become dated or ineffective, but they offer you, the designer, little sense of reward. Although working to invent your own styles and systems is a lot of work, it’s worth it day in and day out. There’s just something about copying that never seems to feed the soul.

Common Mistake #2: Allowing the Grid to Restrict UI Design

Despite my treatise against rules - here’s a rule: there is no way for a UI designer to design without a grid. The web or mobile interface is fundamentally based on a pixel-by-pixel organization - there’s no way around it. However, this does not mean that the interface has to restrict designers to gridded appearances, or even gridded processes.

Using the Grid as a Trendy Tool

Generally, making design moves in response to trends can easily lead to poor design. Perhaps what results is a satisfactory, mostly functional product, but it will almost certainly be boring or uninteresting. To be trendy is to be commonplace. Therefore, when employing the grid in a design, understand what the grid has to offer as a tool and what it might convey. Grids generally represent neutrality, as everything within the restraints of a grid appears equal. Grids also allow for a neutral navigational experience: users can jump from item to item without any interference from the designer’s curatorial hand. With other navigational structures, by contrast, the designer can group content or establish desired sequences.

Although a useful tool, the grid can be very limiting to designers.

Defaulting to the Grid as a Workflow

Dylan Fracareta, a faculty member at RISD and director of PIN-UP Magazine, points out that “most people start off with a 12-column grid…because you can get 3 and 4 off of that”. The danger here is that the designer immediately predetermines anything they might come up with. Alternatively, Fracareta resorts to only using the move tool with set quantities, rather than physically placing things against a grid line. Although this still establishes order, it opens up more potential for unexpected outcomes. Designing for the browser used to mean inputting some code, waiting, and seeing what happens; now, web design has returned to a more traditional form of layout design that’s “more like adjusting two sheets of transparent paper”. How can we as designers benefit from this process?

Working Without a Grid

Although grids can be restricting, they are one of our most traditional forms of organization. The grid is intuitive. The grid is neutral and unassuming. Therefore, grids allow content to speak for itself, and users to navigate at their will and with ease. Despite my warnings about the restrictiveness of grids, different arrays allow for different levels of guidance or freedom.

Common Mistake #3: The Standardization of UI Design with Patterns

The concept of standardized design elements predates UI design. Architectural details have been repeated in practice for typical conditions for centuries. Generally, this practice makes sense for parts of a building that are rarely perceived by a user. However, once architects began to standardize common elements like furniture dimensions or handrail heights, people eventually expressed disinterest in the boring, beige physical environment that resulted. Not only that, but standardized dimensions proved ineffective: although generated as averages, they didn’t really apply to the majority of the population. Thus, although repeatable details have their place, they should be used critically.

If we as designers choose to automate, what value are we providing?

Designers Using the Pattern as Product

Many UI designers don’t view the pattern as a time-saving tool, but rather as an off-the-shelf solution to design problems. Patterns are intended to take recurring tasks or artifacts and standardize them in order to make the designer’s job easier. Instead, certain patterns, like F-pattern layouts, carousels, or pagination, have become the entire structure of many of our interfaces.

Justification for the Pattern is Skewed

Designers tell themselves that the F-shaped pattern exists as a result of the way that people read on the web. Espen Brunborg points out that perhaps people read this way as a result of us designing for that pattern. “What’s the point of having web designers if all they do is follow the recipe,” Brunborg asks.

Common Mistake #4: Misunderstanding Typefaces

Many designers’ quick tips suggest hard and fast rules about fonts as well. Each rule is shouted religiously: “One font family only! Monospaced fonts are dead! Avoid thin fonts at all costs!” But really, the only legitimate rules on type, text, and fonts should be to ensure legibility and to convey meaning. As long as type is legible, there may very well be an appropriate opportunity for all sorts of typefaces. The UI designer must take on the responsibility of knowing the history, uses, and designed intentions of each font they implement in a UI.

Consider a Typeface Only for Legibility

Typefaces convey meaning as well as affect legibility. With all of the discussion surrounding rules for proper legibility on devices, designers are forgetting that type is designed to augment a body of text with a sensibility as much as it is meant to be legible. Legibility is critical, I do not dispute this; my point is that legibility should be an obvious goal. Otherwise, why wouldn’t we have just stopped at Helvetica, or maybe Highway Gothic? The important thing to remember is that fonts are not just designed for different contexts of legibility; typefaces are also essential for conveying meaning or giving a body of text a mood.

Typefaces are each designed for their own uses. Don’t allow narrow-minded rules to restrict an exploration of the world of type.

Avoiding Thin Fonts At All Costs

Now that the trend has come (and almost gone?), a common design criticism is to avoid thin fonts entirely. Thin fonts arrived as a trend, and they may leave as one, too. The hope, however, should be to understand the principles of the typefaces rather than to follow trends at all.

Some say that thin fonts are impossible to read or untrustworthy between devices. All legitimate points. Yet this reflects a condition in the current discussion of UI design: font choice is understood by designers only as a technical choice regarding legibility, rather than also a question of the meaning and value of a typeface. The concern is, if legibility were the only consideration a designer carried, would thin fonts be done away with entirely?

Understand why you are using a thin font, and within what contexts. Bold, thick text is actually much more difficult to read at length than thinner fonts. Yet, as bold fonts carry more visual weight, they’re more appropriate for headings or content with little text. Thin fonts, meanwhile, are often serifs, and their suitability for body text is easy to defend: as serif characters flow together when read in rapid succession, they make for much more comfortable long-form reading.

As well, thin fonts are often chosen because they convey elegance. So, if a designer was working on an interface for a client whose mandate was to convey elegance, they might find themselves hard pressed to find a heavy typeface to do the job.

Not Enough Variation

A common mistake is to not provide enough variation between fonts in an interface. Changing fonts is a good navigational tool to establish visual hierarchy, or potentially different functions within an interface. A crash course on hierarchy will teach you that generally the largest items, or boldest fonts, should be the most important, and carry the most visual weight. Visual importance can convey content headings, or perhaps frequently used functions.

Too Much Variation

A common UI design mistake is to load in several different typefaces from different families, each denoting a unique function. The issue with making every font choice special is that, when there are many fonts, no font stands out. If every font is different, there is too much confusion for a user to recognize any order.

Common Mistake #5: Under/Overestimating the Potential of Contrast

A common entry on many top-UI-design-mistakes lists is that designers should avoid low contrast interfaces. There are many instances in which low contrast designs are illegible and ineffective - true. However, as with the previous points, my worry is that this kind of language simply produces a high contrast design culture in response.

Defaulting to High Contrast

The issue is that high contrast is aesthetically easy to achieve. High contrast visuals are undeniably stimulating or exciting. However, there are many more moods in the human imagination to convey or communicate with, other than high stimulation. To be visually stimulating may also be visually safe.

The same issue is actually occurring in sci-fi film. The entire industry has resorted to black and neon blue visuals as a way to trick viewers into accepting ‘exciting’ visuals, instead of new, creative, or beautiful visuals. This article points out what the sci-fi industry is missing out on by producing safe visuals.

Functionally, if every element in an interface is in high contrast to another, then nothing stands out. This defeats the potential value of contrast as a hierarchical tool. Considering different design moves as tools, rather than rules to follow is essential in avoiding stagnant, trendy design.

Illegibly Low Contrast

The use of low contrast fonts and backgrounds is a commonly cited mistake. However, rather than a design issue, this could be discussed as a beta testing mistake.

How a low contrast element relates to the rest of the interface is a design concern. The issue could be that the hierarchically most significant item is low in contrast with the rest of the interface; for the interface to communicate its organizational structure, the elements should contrast one another in particular ways. That is a design discussion. Whether or not an element is legible is arguably a testing concern.

The point is that in only discussing contrast as a technical issue resolvable by adjusting a value, designers miss out on the critical understanding of what contrast is principally used for.

Conclusion

As with the previous four mistakes, the abuse of patterns will rarely result in a dysfunctional website, just a boring one. The mistake is in being safe. This overly cautious method of design may not cause an individual project to fail, but a series of safe mistakes performed across the greater web community can mean greater failures beyond the individual UI design project. The role of the designer should be to imagine, thoughtfully experiment, and create - not to responsibly follow rules and guidelines.

The original article is from Toptal. Find more UI resources here



Computational Geometry in Python: From Theory to Application

When people think computational geometry, in my experience, they typically think one of two things:

  1. Wow, that sounds complicated.
  2. Oh yeah, convex hull.

In this post, I’d like to shed some light on computational geometry, starting with a brief overview of the subject before moving into some practical advice based on my own experiences (skip ahead if you have a good handle on the subject).

What’s all the fuss about?

While convex hull algorithms are typically included in an introductory algorithms course, computational geometry is a far richer subject that rarely gets sufficient attention from the average developer/computer scientist (unless you’re making games or something).

Theoretically intriguing…

From a theoretical standpoint, the questions in computational geometry are often exceedingly interesting; the answers, compelling; and the paths by which they’re reached, varied. These qualities alone make it a field worth studying, in my opinion.

For example, consider the Art Gallery Problem: We own an art gallery and want to install security cameras to guard our artwork. But we’re under a tight budget, so we want to use as few cameras as possible. How many cameras do we need?

When we translate this to computational geometric notation, the ‘floor plan’ of the gallery is just a simple polygon. And with some elbow grease, we can prove that ⌊n/3⌋ cameras are always sufficient for a polygon on n vertices, no matter how messy it is. The proof itself uses dual graphs, some graph theory, triangulations, and more.

Here, we see a clever proof technique and a result that is curious enough to be appreciated on its own. But if theoretical relevance isn’t enough for you…

And important in practice

As I mentioned earlier, game development relies heavily on the application of computational geometry (for example, collision detection often relies on computing the convex hull of a set of objects); as do geographic information systems (GIS), which are used for storing and performing computations on geographical data; and robotics, too (e.g., for visibility and planning problems).

Why’s it so tough?

Let’s take a fairly straightforward computational geometry problem: given a point and a polygon, does the point lie inside of the polygon? (This is called the point-in-polygon, or PIP problem.)

PIP does a great job of demonstrating why computational geometry can be (deceptively) tough. To the human eye, this isn’t a hard question. We see the following diagram and it’s immediately obvious to us that the point is in the polygon:

This point-in-polygon problem is a good example of computational geometry in one of its many applications.

Even for relatively complicated polygons, the answer doesn’t elude us for more than a second or two. But when we feed this problem to a computer, it might see the following:

poly = Polygon([Point(0, 5), Point(1, 1), Point(3, 0),
                Point(7, 2), Point(7, 6), Point(2, 7)])
point = Point(5.5, 2.5)
poly.contains(point)

What is intuitive to the human brain does not translate so easily to computer language.

More abstractly (and ignoring the need to represent these things in code), the problems we see in this discipline are very hard to rigorize (‘make rigorous’) in a computational geometry algorithm. How would we describe the point-in-polygon scenario without using such tautological language as ‘A point is inside a polygon if it is inside the polygon’? Many of these properties are so fundamental and so basic that it is difficult to define them concretely.

How would we describe the point-in-polygon scenario without using such tautological language as 'it's inside the polygon if it's inside the polygon'?

Difficult, but not impossible. For example, you could rigorize point-in-polygon with the following definitions (a code sketch of the first follows the list):

  • A point is inside a polygon if any infinite ray beginning at the point intersects with an odd number of polygon edges (known as the even-odd rule).
  • A point is inside a polygon if it has a non-zero winding number (defined as the number of times that the curve defining the polygon travels around the point).
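
To make the first definition concrete, here is a minimal sketch of the even-odd rule in Python. It deliberately uses plain coordinate tuples rather than the Polygon and Point classes from earlier, so it stands on its own:

def point_in_polygon(point, vertices):
    """Even-odd rule: cast a horizontal ray from the point to the right
    and count the polygon edges it crosses; an odd count means inside."""
    x, y = point
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Only edges that straddle the ray's height can cross it...
        if (y1 > y) != (y2 > y):
            # ...and the crossing must lie to the right of the point.
            if x1 + (y - y1) * (x2 - x1) / (y2 - y1) > x:
                inside = not inside
    return inside

# The polygon and point from the earlier snippet:
vertices = [(0, 5), (1, 1), (3, 0), (7, 2), (7, 6), (2, 7)]
print(point_in_polygon((5.5, 2.5), vertices))  # True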

Unless you’ve had some experience with computational geometry, these definitions probably won’t be a part of your existing vocabulary. And perhaps that’s emblematic of how computational geometry can push you to think differently.

Introducing CCW

Now that we have a sense for the importance and difficulty of computational geometry problems, it’s time to get our hands dirty.

At the backbone of the subject is a deceptively powerful primitive operation: counterclockwise, or ‘CCW’ for short. (I’ll warn you now: CCW will pop up again and again.)

CCW takes three points A, B, and C as arguments and asks: do these three points compose a counterclockwise turn (vs. a clockwise turn)? In other words, is A -> B -> C a counterclockwise turn?

For example, the green points are CCW, while the red points are not:

In this computational geometry primitive, the green points form counterclockwise turns, while the red points form clockwise turns.

Why CCW Matters

CCW gives us a primitive operation on which we can build. It gives us a place to start rigorizing and solving computational geometry problems.

To give you a sense for its power, let’s consider two examples.

Determining Convexity

The first: given a polygon, can you determine if it’s convex? Convexity is an invaluable property: knowing that your polygons are convex often lets you improve performance by orders of magnitude. As a concrete example: there’s a fairly straightforward PIP algorithm that runs in O(log n) time for convex polygons, but fails for many concave ones.

Intuitively, this gap makes sense: convex shapes are ‘nice’, while concave shapes can have sharp edges jutting in and out—they just don’t follow the same rules.

A simple (but non-obvious) computational geometry algorithm for determining convexity is to check that every triplet of consecutive vertices is CCW. This takes just a few lines of Python geometry code (assuming that the points are provided in counterclockwise order—if points is in clockwise order, you’ll want all triplets to be clockwise):

class Polygon(object):
    ...
    def isConvex(self):
        for i in range(self.n):
            # Check every triplet of consecutive points
            A = self.points[i % self.n]
            B = self.points[(i + 1) % self.n]
            C = self.points[(i + 2) % self.n]
            if not ccw(A, B, C):
                return False
        return True

Try this on paper with a few examples. You can even use this result to define convexity. (To make things more intuitive, note that a CCW curve from A -> B -> C corresponds to an angle of less than 180º, which is a widely taught way to define convexity.)
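
For instance, assuming the Point and Polygon classes from the snippets above, with vertices supplied in counterclockwise order:

square = Polygon([Point(0, 0), Point(2, 0), Point(2, 2), Point(0, 2)])
square.isConvex()  # True: every consecutive triplet turns CCW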

Line Intersection

As a second example, consider line segment intersection, which can also be solved using CCW alone:

def intersect(a1, b1, a2, b2):
    """Returns True if line segments a1b1 and a2b2 intersect."""
    return ccw(a1, b1, a2) != ccw(a1, b1, b2) and ccw(a2, b2, a1) != ccw(a2, b2, b1)

Why is this the case? Line segment intersection can also be phrased as: given a segment with endpoints A and B, do the endpoints C and D of another segment lie on the same side of AB? In other words, if the turns from A -> B -> C and A -> B -> D are in the same direction, the segments can’t intersect. When we use this type of language, it becomes clear that such a problem is CCW’s bread and butter.
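
As a quick sanity check, again assuming the Point class from before (ccw itself is defined in the next section):

# The diagonals of a square cross each other:
intersect(Point(0, 0), Point(4, 4), Point(0, 4), Point(4, 0))  # True
# Two disjoint collinear segments do not:
intersect(Point(0, 0), Point(1, 1), Point(2, 2), Point(3, 3))  # False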

A Rigorous Definition

Now that we have a taste for the importance of CCW, let’s see how it’s computed. Given points A, B, and C:

def ccw(A, B, C):
    """Tests whether the turn formed by A, B, and C is ccw"""
    return (B.x - A.x) * (C.y - A.y) > (B.y - A.y) * (C.x - A.x)

To understand where this definition comes from, consider the vectors AB and BC. If we take their cross product, AB x BC, this will be a vector along the z-axis. But in which direction (i.e., +z or -z)? As it turns out, if the cross product is positive, the turn is counterclockwise; otherwise, it’s clockwise. Concretely, the z-component works out to (B.x - A.x) * (C.y - A.y) - (B.y - A.y) * (C.x - A.x); the code takes AB x AC, which equals AB x BC since AB x AB = 0, and simply checks whether that quantity is positive.

This definition will seem unintuitive unless you have a really good understanding of linear algebra, the right-hand rule, etc. But that’s why we have abstraction—when you think CCW, just think of its intuitive definition rather than its computation. The value will be immediately clear.

My Dive Into Computational Geometry and Programming Using Python

Over the past month, I’ve been working on implementing several computational geometry algorithms in Python. As I’ll be drawing on them throughout the next few sections, I’ll take a second to describe my computational geometry applications, which can be found on GitHub.

Note: My experience is admittedly limited. As I’ve been working on this stuff for months rather than years, take my advice with a grain of salt. That said, I learned much in those few months, so I hope these tips prove useful.

Read the full article on the Toptal Engineering Blog.



Top Ten Front-End Design Rules For Developers

As front-end developers, our job is, essentially, to turn designs into reality via code. Understanding, and being competent in, design is an important component of that. Unfortunately, truly understanding front-end design is easier said than done. Coding and aesthetic design require some pretty different skill sets. Because of that, some front-end devs aren’t as proficient in the design aspect as they should be, and as a result, their work suffers.

My goal is to give you some easy-to-follow rules and concepts, from one front-end dev to another, that will help you go from start to finish of a project without messing up what your designers worked so hard on (or possibly even allowing you to design your own projects with decent results).

Of course, these rules won’t take you from bad to magnificent in the time it takes to read one article, but if you apply them to your work, they should make a big difference.

Do Stuff In A Graphics Program

It’s truly rare for a project to go from start to finish with every single aesthetic detail captured in the design files. And, unfortunately, designers aren’t always around to run to for a quick fix.

Therefore, there always comes a point in any front-end job where you end up having to make some aesthetic-related tweaks. Whether it’s making the checkmark that shows when you check the checkbox, or making a page layout that the PSD missed, front-enders often end up handling these seemingly minor tasks. Naturally, in a perfect world this wouldn’t be the case, but I have yet to find a perfect world, hence we need to be flexible.

A good front-end developer has to use professional graphics tools. Accept no substitute.

For these situations, you should always use a graphics program for mockups. I don’t care which tool you choose: Photoshop, Illustrator, Fireworks, GIMP, whatever. Just don’t attempt to design from your code. Spend a minute launching a real graphics program and figuring out how it should look, then go to the code and make it happen. You may not be an expert designer, but you’ll still end up with better results.

Match the Design, Don’t Try To Beat It

Your job is not to impress with how unique your checkmark is; your job is to match it to the rest of the design.

Those without a lot of design experience can easily be tempted to leave their mark on the project with seemingly minor details. Please leave that to the designers.

Developers have to match the original front-end design as closely as possible.

Instead of asking “Does my checkmark look amazing?” you should be asking, “How well does my checkmark match the design?”

Your focus should always be on working with the design, not on trying to outdo it.

Typography Makes All the Difference

You’d be surprised to know how much of the end look of a design is influenced by typography. You’d be just as surprised to learn how much time designers spend on it. This is not a “pick-it-and-go” endeavor; some serious time and effort goes into it.

If you end up in a situation where you actually have to choose typography, you should spend a decent amount of time doing so. Go online and research good font pairings. Spend a few hours trying those pairings and making sure you end up with the best typography for the project.

Is this font right for your project? When in doubt, consult a designer.

If you’re working with a design, then make sure you follow the designer’s typography choices. This doesn’t just mean choosing the font, either. Pay attention to the line spacing, letter spacing, and so on. Don’t overlook how important it is to match the typography of the design.

Also, make sure you use the right fonts in the correct spot. If the designer uses Georgia for headers only and Open Sans for body, then you shouldn’t be using Georgia for body and Open Sans for headers. Typography can make or break aesthetics easily. Spend enough time making sure you are matching your designer’s typography. It will be time well spent.

Front-end Design Doesn’t Tolerate Tunnel Vision

You’ll probably be making small parts of the overall design.

Tunnel vision is a common pitfall for front-end developers. Don’t focus on a single detail, always look at the big picture.

An example I’ve been going with is making the checkmark for a design that includes custom checkboxes without showing them checked. It’s important to remember that the parts you are making are small pieces of an overall design. Make your checkmark look exactly as important as a checkmark on the page should look, no more, no less. Don’t get tunnel vision about your one little part and make it something it shouldn’t be.

In fact, a good technique for doing this is to take a screenshot of the program so far, or of the design files, and design within it, in the context in which it will be used. That way, you really see how it affects other design elements on the page, and whether it fits its role properly.

Relationships And Hierarchy

Pay special attention to how the design works with hierarchy. How close are the titles to the body of text? How far are they from the text above them? How does the designer seem to be indicating which elements/titles/text bodies are related and which aren’t? They’ll commonly do these things by boxing related content together, using varying white space to indicate relationships, using similar or contrasting colors to indicate related/unrelated content, and so on.

A good front-end developer will respect design relationships and hierarchy. A great developer will understand them.

It’s your job to make sure that you recognize the ways in which the design accomplishes relationships and hierarchy and to make sure those concepts are reflected in the end product (including for content that was not specifically designed, and/or dynamic content). This is another area (like typography) where it pays to take extra time to make sure you’re doing a good job.

Be Picky About Whitespace And Alignment

This is a great tip for improving your designs and/or better implementing the designs of others: If the design seems to be using spacings of 20 units, 40 units, etc., then make sure every spacing is a multiple of 20 units.

This is a really drop-dead simple way for someone with no eye for aesthetics to make a significant improvement quickly. Make sure your elements are aligned down to the pixel, and that the spacing around every edge of every element is as uniform as possible. Where you can’t do that (such as places where you need extra space to indicate hierarchy), make them exact multiples of the spacing you’re using elsewhere, for example two times your default to create some separation, three times to create more, and so on.

Do your best to understand how the designer used whitespace and follow those concepts in your front-end build.

A lot of devs achieve this for specific content in the design files, but when it comes to adding/editing content, or implementing dynamic content, the spacing can go all over the place because they didn’t truly understand what they were implementing.

Do your best to understand how the designer used whitespace and follow those concepts in your build. And yes, spend time on this. Once you think your work is done, go back and measure the spacing to ensure you have aligned and uniformly spaced everything as much as possible, then try out the code with lots of varying content to make sure it’s flexible.

If You Don’t Know What You’re Doing, Do Less

I’m not one of those people that thinks every project should use minimalist design, but if you’re not confident in your design chops and you need to add something, then less is more.

Less is more. If your designer did a good job to begin with, you should refrain from injecting your own design ideas.

The designer took care of the main stuff; you only need to do minor fillers. If you’re not very good at design, a good bet is to do the minimum you can to make that element work. That way, you’re injecting less of your own design into the designer’s work, and affecting it as little as possible.

Let the designer’s work take center stage and let your work take the back seat.

Time Makes Fools Of Us All

I’ll tell you a secret about designers: 90 percent (or more) of what they actually put down on paper, or a Photoshop canvas, isn’t that great.

They discard far more than you ever see. It often takes many revisions and plenty of fiddling to get a design to the point where they’d even let the guy in the next cubicle see it, never mind the actual client. You usually don’t go from a blank canvas to a good design in one step; there are a bunch of iterations in between. People rarely produce good work until they understand that and allow for it in their process.

If you think the design can be improved upon, consult your designer. It’s possible they already tried a similar approach and decided against it.

So how do you implement this? One important method is taking time between versions. Work until it looks like something you like then put it away. Give it a few hours (leaving it overnight is even better), then open it up again and take a look. You’ll be amazed at how different it looks with fresh eyes. You’ll quickly pick out areas for improvement. They’ll be so clear you’ll wonder how you possibly missed them in the first place.

In fact, one of the better designers I’ve known takes this idea a lot further. He would start by making three different designs. Then, he’d wait at least 24 hours, look at them again and throw them all out and start from scratch on a fourth. Next, he’d allow a day between each iteration as it got better and better. Only when he opened it up one morning, and was totally happy, or at least, as close as a designer ever gets to totally happy, would he send it to the client. This was the process he used for every design he made, and it served him very well.

I don’t expect you to take it that far, but it does highlight how helpful time without “eyes on the design” can be. It’s an integral part of the design process and can make improvements in leaps and bounds.

Pixels Matter

You should do everything in your power to match the original design in your finished program, down to the last pixel.

Front-end developers should try to match the original design down to the last pixel.

In some areas you can’t be perfect. For example, your control over letter-spacing might not be quite as precise as the designer’s, and a CSS shadow might not exactly match a Photoshop one, but you should still attempt to get as close as possible. For many aspects of the design, you really can achieve pixel-perfect precision. Doing so makes a big difference in the end result: a pixel off here and there doesn’t seem like much, but it adds up and affects the overall aesthetic much more than you’d think. So keep an eye on it.

There are a number of tools that help you compare original designs to end results, or you can simply take screenshots and paste them into the design file to compare each element as closely as possible. Just lay the screenshot over the design and make it semi-transparent so that you can see the differences. Then you know how much adjustment you have to make to get it spot on.

Get Feedback

It’s hard to gain an “eye for design,” and it’s even harder to do it on your own. You should seek the input of others to really see how you can make improvements.

I am not suggesting you grab your neighbor and ask for advice; I mean you should consult real designers and let them critique your work and offer suggestions.

Let designers critique your work. Put their criticism to good use and don’t antagonize them.

It takes some bravery to do so, but in the end it is one of the most powerful things you can do to improve the project in the short-term, and to improve your skill level in the long run.

Even if all you have to fine tune is a simple checkmark, there are plenty of people willing to help you. Whether it’s a designer friend, or an online forum, seek out qualified people and get their feedback.

Build a long-lasting, productive relationship with your designers. It’s vital for useful feedback, quality, and execution.

It may sound time consuming, and may cause friction between you and your designers, but in the big scheme of things, it’s worth it. Good front-end developers rely on valuable input from designers, even when it’s not something they like to hear.

Therefore, it’s vital to build and maintain a constructive relationship with your designers. You’re all in the same boat, so to get the best possible results you have to collaborate and communicate every step of the way. The investment in building bonds with your designers is well worth it, as it will help everyone do a better job and execute everything on time.

Conclusion

To summarize, here is a short list of design tips for front-end developers:

  • Design in a graphics program. Don’t design from code, not even the small stuff.
  • Match the design. Be conscious of the original design and don’t try to improve it, just match it.
  • Typography is huge. The time you spend making sure it’s right should reflect its importance.
  • Avoid tunnel vision. Make sure your additions stand out only as much as they should. They’re not more important just because you designed them.
  • Relationships and hierarchy: Understand how they work in the design so that you can implement them properly.
  • Whitespace and alignment are important. Make them accurate to the pixel, and keep them uniform throughout anything you add.
  • If you’re not confident in your skills, then make your additions as minimally styled as you can.
  • Take time between revisions. Come back later to see your design work with fresh eyes.
  • Pixel-perfect implementation is important wherever possible.
  • Be brave. Seek out experienced designers to critique your work.

Not every front-end developer is going to be a fantastic designer, but every front-end dev should at least be competent in terms of design.

You need to understand enough about design concepts to identify what’s going on, and to properly apply the design to your end product. Sometimes, you can get away with blind copying if you’ve got a thorough designer (and if you’re detail oriented enough to truly copy it pixel for pixel).

However, in order to make large projects shine across many variations of content, you need some understanding of what’s going through the designer’s head. You don’t merely need to see what the design looks like, you need to know why it looks the way it does, and that way you can be mindful of technical and aesthetic limitations that will affect your job.

So, even as a front-end developer, part of your regular self-improvement should always include learning more about design. 

The original article was written by BRYAN GREZESZAK - FREELANCE SOFTWARE ENGINEER @ TOPTAL and can be read here.




Copyright(c) 2017 - PythonBlogs.com
By using this website, you signify your acceptance of Terms and Conditions and Privacy Policy
All rights reserved