concurrency: part 2 - actors

Posted by anton
on Friday, September 19, 2008

message-passing

if shared memory makes concurrent programming difficult, what else is there that an app developer can use?

one way of representing coarse-grained parallelism is through message-passing concurrency.

the idea is pretty simple – the only way to share state between isolated components is through message passing. what happens inside the components is their own business.

there is no global state of the whole system, unlike in a shared-memory program that behaves like a giant state machine. sending and receiving of messages happens concurrently with the computations performed by the components themselves. this approach is much easier for the developer to reason about, and it maps easily to multiple CPUs and multiple machines.

a lot of the enterprise workflow-like processing falls into this model. it is basically a pipeline of worker pools, configured independently and possibly running on separate machines. a unit of work travels through the pipeline, as it is being handed off from one set of workers to another.

actors

one of the common implementations of message-passing concurrency is the actor model. i'll take the liberty of interpreting this model to fit my own developer needs, even though i am butchering decades of academic research in the process.

the actor model represents well multiple computers talking to each other using network packets, components exchanging messages using JMS, independent processes or threads talking to each other using messages – basically anything where isolation is possible and interactions are loosely coupled.

usually each actor has a mailbox associated with it (often represented by a queue), where messages are stored until the actor processes them. messages map well to physical artifacts in the real world – they are immutable, and only one actor can handle a given message at a time.

actors are connected with channels; individual actors are isolated from each other – a failure of an actor does not affect another actor. no other actor can change the inner state of a given actor – the only way to communicate is through message-passing.

messaging is usually asynchronous, but synchronous messaging could also be useful.

depending on implementation, beware of deadlocks if you are using synchronous messaging. another issue to keep in mind is order of messages – depending on implementation it might not be preserved.
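the mailbox-per-actor idea is easy to sketch in plain java – a queue drained by a single thread, so the actor's private state never needs locking. the CounterActor class below is my own toy, not any library's API; it sums the integers it is sent asynchronously:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// a toy actor: a mailbox (queue) drained by a single worker thread,
// so messages are handled one at a time and the actor's state is never
// touched by two threads at once
class CounterActor {
    private final BlockingQueue<Integer> mailbox = new LinkedBlockingQueue<>();
    private final AtomicInteger total = new AtomicInteger();
    private final CountDownLatch done;
    private final Thread worker;

    CounterActor(int expectedMessages) {
        done = new CountDownLatch(expectedMessages);
        worker = new Thread(() -> {
            try {
                while (true) {
                    int msg = mailbox.take(); // blocks until a message arrives
                    total.addAndGet(msg);     // the actor's private state
                    done.countDown();
                }
            } catch (InterruptedException e) {
                // interrupted == asked to shut down
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // asynchronous send: returns immediately, message waits in the mailbox
    void send(int msg) {
        mailbox.offer(msg);
    }

    // wait until all expected messages were processed, then read the result
    int awaitTotal() {
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        worker.interrupt();
        return total.get();
    }
}
```

the point of the sketch is that the caller never synchronizes on anything – the queue hand-off is the only coordination.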

while some advocate an “everything is an actor” approach, and i get dizzy imagining the possibilities, the pragmatic app developer in me lives in the real world among existing apps. in this case actors work best as a library for an existing language.

erlang

although i shied away from the “actors everywhere” approach above, erlang is the most successful implementation that actually does just that. it is not just a language, but a whole platform that transparently runs actors within a single process as well as across multiple machines.

as this topic is heating up, one should at least read the book and play with the language. after all, a language that doesn’t affect the way you think about programming is not worth knowing, and erlang is enough of a paradigm shift to kickstart your concurrency thinking.

Tibco BusinessWorks

as i’ve described before, BusinessWorks (BW) is an example of an integration DSL that happens to use actors.

given an integration process (e.g. receive a message on JMS queue A, enrich it from a database, transform it, and send it to a JMS topic B), you describe it using BW language constructs. then it becomes an actor definition that you can deploy on an engine (really a managed JVM instance). there could be multiple engines running on multiple machines, and each engine can have many process instances (aka actors in our terminology) running inside of it. a process instance gets created from a process definition whenever a new message arrives on a queue (mailbox in actors’ terminology).

a scheduler inside the individual engine takes care of creating process instances (there could be thousands) and scheduling them on the worker threads.

all of this mapping happens at deploy time; as a developer, you do not worry about it.

actors talk to each other using message-passing, thus your actor implementation does not even have to worry about threads or concurrency – you just express your integration logic. you could use shared memory instead, but it would not scale well, since you are limited to one JVM; nor would it be natural, since you would have to use explicit language constructs. the language support for immutability is very convenient, as i have mentioned earlier.

if you use a JMS server to pass messages around, it becomes a sort of a mailbox, holding messages for you in the queue. each incoming message would eventually spawn an instance of the actor, feeding it the message as an argument. multiple instances of the same actor can read from the same queue, thus achieving load-balancing.

once you recall that jms supports -filters- selectors, you have an actors implementation that curiously matches something like erlang's.

note that this is not fine-grained parallelism; your units of work are more coarse-grained and very loosely coupled, but fundamentally, the model is the same, and it scales like crazy achieving massive throughput.

even if you do not end up using BW, you can implement this model by hand relatively easily.
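a hand-rolled version of the queue-plus-worker-pool model takes very little code with java.util.concurrent. the sketch below (LoadBalancedStage is my own toy name, not a BW or JMS API) pre-loads a shared queue and lets a pool of identical workers drain it – several instances of the same "actor" reading off one queue, load-balancing the messages the way a JMS queue with competing consumers would:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// several instances of the same "actor" reading off one shared queue --
// the pool load-balances messages across workers
class LoadBalancedStage {
    static int processAll(int messages, int workers) {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 1; i <= messages; i++) {
            queue.offer(i); // pre-load the "mailbox"
        }

        AtomicInteger processed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.submit(() -> {
                Integer msg;
                while ((msg = queue.poll()) != null) { // drain until empty
                    processed.incrementAndGet();       // "business logic" goes here
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }
}
```

each message is handed to exactly one worker, and adding workers (or moving the queue to a real JMS server) scales the stage out without touching the worker logic.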

so what if i wanted more fine-grained and more efficient support for actors in my language of choice (provided i am not using erlang)?

ruby

the revactor networking library includes an actors implementation (also see this great intro to actors by Tony Arcieri), but i have not seen a more generic approach yet.

note that ruby is really hampered by its lack of proper threading support; this is why the jruby guys would be in much better shape if they were to roll their own actors implementation.

scala

this is probably the most mature implementation i’ve seen (see this paper). they take advantage of scala language features to simplify the syntax and to unify synchronous and asynchronous message-passing. individual actors are represented as threads or as more lightweight primitives that get scheduled to run on threads in a thread pool. it is type-safe, but it relies on convention to make sure you do not mutate your messages.

although i could see where representing actors as threads could be too heavyweight for some tasks, in the case of java and scala your mileage may vary (see this presentation from Paul Tyma).

groovy

given language features like closures and a generally simpler syntax, together with the fact that it sits on top of a JDK that includes java.util.concurrent, one would imagine that groovy would be a perfect candidate for an actors implementation. however, the only thing i found so far was groovy actors, and it seems to have been dormant for a while.

python

i do not know enough about python’s memory model and its implementation, but i suspect it suffers from the same “feature” as ruby – i.e. a global interpreter lock, which means that it won’t be able to scale to multiple CPUs (and, similar to ruby, jython that builds on the JVM comes to the rescue).

the only thing i’ve looked at so far is stackless python, which is a modified version of python that makes concurrency easier (see this tutorial by Grant Olson that also includes actors). it introduces tasklets aka fibers, channels, and a scheduler among other things.

java

this is where i am a bit surprised – i do not see a good drop-in-a-jar-and-go actors library blessed and used by all. there seems to be some research projects out there, but i want something that works for me now and supports in-memory zero-copy message passing, sync/async messaging, and type safety. i am OK with abiding by conventions instead of compiler checking things for me.

i suspect the reason for this is that some rudimentary form of actors can be implemented relatively easily using existing concurrency libraries, and this approach is intuitive enough without putting labels on it.

nevertheless, this is what i found:

  • jetlang is a port of a .NET library and looks at Scala actors for inspiration. it is still quite beta, but it looks promising
  • kilim (from one of the principal engineers of weblogic server) still seems to be a bit too much of a research project for my taste, but the theory behind it is sound

and there are a number of research projects out there as well.

bottom line

actors are a great abstraction, and a “good enough” version is easy to implement – think about it, consider it, use it!

it helps if your language/platform supports concurrency primitives to build upon. this includes true threading support that scales to many CPUs, although we could also benefit from a standard fibers implementation, since they are more lightweight than typical threads and would allow creation of a large number of actors that later could be mapped onto threads for execution.

each language could benefit from a well thought-out actors library, since it would push developers in the right direction.

it is not right for everything though – it might not be fine-grained enough, and it might not map well to problems that rely on the ordering of messages or on the presence of any other state across multiple actors or multiple messages.

to be continued

what is on the horizon that is worth noting? what are some of the interesting research topics? what have we forgotten over the years? what other heuristics/patterns and libraries could be immediately useful?

concurrency: part 1

Posted by anton
on Friday, September 12, 2008

true to the purpose of this blog, below is an attempt to organize my (admittedly very superficial) experience with concurrency.

my 10GHz CPU

you probably noticed that moore’s law does not really translate into CPU clock speed anymore. if it did, we would have had 10GHz CPUs by now, but for half a decade we haven’t really moved past 3GHz.

that is to be expected for the current generation of hardware – the gains have to happen elsewhere. for a little while we’ll get a performance boost due to increases in the size and speed of the caches that improve locality, but in the long run it seems that multiple CPUs is where the improvements are to be mined from (this also includes specialized CPUs like Cell and GPUs in general).

this means that more and more people will have to think about their applications in terms of parallel processing. this also means that optimizations will become more and more important for those workloads that cannot be parallelized and therefore will be stuck on a single CPU (for a good introduction see The Free Lunch Is Over at Dr. Dobb’s Journal).

the bottom line is that as an app developer you cannot ignore the problem any longer; to make matters worse, there is no automagical solution in the near future that would make your application take advantage of multiple processors.

my concurrency story

in the past decade most of the stuff i’ve worked with had some sort of coarse-grained parallelism; the rest was taken care of by the underlying framework.

i started with the unix philosophy of small programs connected via pipes, each performing a simple task. a little later came fork and signals. things were simple, and the OS took care of everything.

then came the web – it was mostly stateless, with the database doing all the heavy lifting when it came to shared state. we just added boxes if we needed to grow. in ETL, a multi-box, multi-CPU setup was also natural, and the tools were designed to conceal concurrency; same goes for integration, where concurrency was at the level of data flows, which made things rather simple.

it is only in the past year or so that i had to really dive deeper into relatively low-level concurrent development with java.

my dog-eared copy of Java Concurrency in Practice has proved to be quite an indispensable reference. the book is a bit uneven, and the editor should have spent more time on it, but you get used to it. it is a great practical resource, especially in the presence of so much confusing and incomplete information online.

jsr-166, introduced in java5 (and the primary subject of the book), is such a productivity boost; being part of the JDK, it is a big step towards letting mere mortals like me really embrace concurrent programming.

i find myself using the Executors convenience methods all the time: it is so easy to create a pool and then just feed it Callable instances, getting a Future instance back as a handle. if more flexibility is needed, i use ThreadPoolExecutor. Queues are great as a communication channel for any sort of producer/consumer scenario, anything that requires message-passing or any other sort of work hand-off. Atomics are also great – i do not have to think twice when implementing counters or other simple data structures.
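that pool-Callable-Future workflow looks something like the sketch below (PoolDemo and sumOfSquares are my own illustrative names, not library API) – create a pool, submit Callables, collect results through Futures:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// the jsr-166 workflow: create a pool, feed it Callable instances,
// get Future instances back as handles to the eventual results
class PoolDemo {
    static int sumOfSquares(int n) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> results = new ArrayList<>();
            for (int i = 1; i <= n; i++) {
                final int x = i;
                results.add(pool.submit(() -> x * x)); // Callable<Integer>
            }
            int sum = 0;
            for (Future<Integer> f : results) {
                sum += f.get(); // blocks until that task's result is ready
            }
            return sum;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

no explicit threads, no explicit synchronization – the pool and the Futures carry all of the coordination.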

most of the time i do not even have to work with threads or low-level synchronization primitives directly – they are buried deep within the libraries. i have fewer nightmares, since i do not have to touch volatile as often.

at some point i’ve read both editions of doug lea’s book, but i was always hesitant to recommend it; i’d rather rely on libraries that abstracted all of this away. now that java.util.concurrent has been out for 4 years, and Java Concurrency in Practice has become a bestseller, there are no more excuses.

one thing i’ve learned though – when you think you’ve got this stuff, you discover a whole new class of problems that makes you realize how complicated all of this really is, and how truly difficult it is to write larger concurrent programs.

you really, really have to think hard about how you share your objects, how you compose them and operate on them. you need to really understand how the language and the runtime work (i find myself checking JLS quite often). this is where good OO practices like encapsulation become even more important, since you are not just risking maintenance overhead, but you are risking the correctness of your program.

now, i always told myself that programming is not an exercise in manliness. i am just an app developer; i want to ship working code that solves the customer’s problems, not spend countless hours trying to reason through non-blocking algorithms just because i decided to do something non-trivial with ConcurrentHashMap. at the same time i do not want to waste my precious CPUs, so what am i to do? shouldn’t this stuff be easier? is there something i am missing?

threads considered harmful

actually, there is no problem with threads per se; the problem is with shared state.

in a normal sequential program you only worry about the logic as it unfolds before you – one statement after another, in order. in a concurrent program that uses threads and shared state, in addition to all the usual problems you also have the problem of non-deterministic state: since at any point in time any thread can come in and mess with your data – even between operations you previously considered atomic (like counter++) – the number of states your program can be in suffers a combinatorial explosion. this makes it really hard to reason about correctness.
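to make the counter++ point concrete: the increment is really three steps (read, add, write), so two threads can interleave between them and lose updates. the sketch below (SafeCounter is my own name) shows the safe version using AtomicInteger, which performs the whole read-modify-write as one indivisible operation:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// counter++ on a plain int is read-modify-write: two threads can both read
// the same old value and one increment gets lost. AtomicInteger does the
// whole increment as a single indivisible hardware operation.
class SafeCounter {
    static int countTo(int perThread, int threads) {
        AtomicInteger counter = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < perThread; i++) {
                    counter.incrementAndGet(); // with a shared plain int++, some increments would be lost
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter.get();
    }
}
```

swap the AtomicInteger for a shared plain int and the final count will come up short on most runs – exactly the kind of error that is invisible in a single-threaded test.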

your code becomes brittle, sacrificing failure isolation – one misbehaving thread can potentially harm the whole runtime (a good analogy is BSOD caused by a device driver).

in addition, things don’t compose – a transfer operation performed by a thread-safe customer between two thread-safe accounts is not going to be automatically thread-safe.
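one common way to make that transfer composite operation safe is to lock both accounts for the duration of the transfer, always in a fixed order so that two opposite transfers cannot deadlock. the Account class below is my own illustrative sketch of that technique, not anyone's production code:

```java
// two individually synchronized accounts do not make transfer() thread-safe
// by themselves; locking both accounts in a fixed (id-based) order makes the
// composite operation atomic without risking deadlock
class Account {
    final int id;
    private long balance;

    Account(int id, long balance) {
        this.id = id;
        this.balance = balance;
    }

    synchronized long balance() { return balance; }

    synchronized void adjust(long delta) { balance += delta; }

    static void transfer(Account from, Account to, long amount) {
        // always lock the lower id first: if thread 1 transfers a->b while
        // thread 2 transfers b->a, both grab the same lock first, so neither
        // can hold one lock while waiting forever for the other
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.adjust(-amount);
                to.adjust(amount);
            }
        }
    }
}
```

note how the composition problem forced locking knowledge out of the accounts and into the transfer – exactly the kind of leak that message-passing designs avoid.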

to make matters worse, some of these errors remain hidden when run on commodity 1-2 CPU IA32 hardware, but as the number of CPUs grows, or as their architecture becomes less restrictive in order to help with concurrency, things start to break down.

for more thorough discussion see The Problem With Threads by Edward A. Lee and Cliff Click’s We Don’t Know How To Program…

now what?

a natural reaction is to forget about fine-grained parallelism and offload the hard stuff onto someone else. after all, i am an app programmer, i care about business problems, what’s all of this yak shaving about?!

in some cases we can get away with firing up individual processes to take advantage of multiple CPUs. most of the time, though, it means that the problem has been pushed further down the stack, which often turns out to be the database. this is the route the rails folks went, and it certainly was a pragmatic approach at the time (now that they are forced to deal with efficiency, threading is back in the picture; for a discussion of the issues see Q/A: What Thread-safe Rails Means).

if you can get away with using individual processes, go for it (see google chrome) – you get failure isolation, you get immutability with respect to other processes (it won’t be as easy for another process to mess with your data), and as an additional benefit you get to use all the standard tools the OS has for managing and troubleshooting processes (as opposed to the often incomplete and idiosyncratic tools for thread management that your runtime platform of choice offers – if any).

still, as we need more and more fine-grained concurrency and as the level of concurrency increases (it is not just a handful of CPUs now, but dozens, and even hundreds), one process per task becomes too expensive (context switching, high costs of creating a new process, memory overhead, etc). so we are back to some sort of lightweight thread-like primitives running within the same process, sharing some common resources.

most of the popular languages/platforms these days provide some sort of threading and shared memory support. but as outlined above, they suffer from some fundamental problems. there are some practical things at various levels of abstraction that can help: low-level constructs within the language/platform itself, tooling, and higher-level libraries/mini-languages.

language

  • make immutability easier – take note of functional languages, but also make it practical. in java’s case, for instance, it could mean extending immutability to some core data structures (see scala collections) or making it easier to tag an instance as immutable (see ruby’s freeze; this reeks of boilerplate though) – this way errors will be caught at compile time
  • consider sharing data only through explicit means, ideally checked at compile time. thus by default nothing is shared, and in order to make something shared you have to explicitly tag it as such. ideally, this would also come with some sort of namespace support, thus limiting mutations to a sandbox (see clojure for reference)
  • make language safer to use when it comes to exposing shareable state (this is when something like static becomes a problem – see Shared Data Considered Harmful for an example that applies to concurrency)
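even without language support, the immutability point above can be followed by convention. the sketch below (OrderMessage is my own illustrative name) shows the usual java recipe for an immutable message that is safe to hand to any number of threads or actors: final fields, no setters, and a defensive copy of the one mutable ingredient:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// an immutable message: all fields final, no setters, and a defensive
// copy of the mutable list -- safe to share across threads without locks
final class OrderMessage {
    private final String orderId;
    private final List<String> items;

    OrderMessage(String orderId, List<String> items) {
        this.orderId = orderId;
        // copy, then wrap: callers can neither mutate our copy nor
        // affect us by mutating the list they passed in
        this.items = Collections.unmodifiableList(new ArrayList<>(items));
    }

    String orderId() { return orderId; }

    List<String> items() { return items; }
}
```

this is exactly the boilerplate the bullet points complain about – it works, but nothing stops a teammate from forgetting the defensive copy, which is why compile-time-checked immutability would be so valuable.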

tooling

  • static analysis tools might help, but we need to give them a bit more than just an infinite number of states. findbugs, for instance, supports concurrency annotations, and something like chord could also be promising. this stuff is complex though, and there are limits to static analysis (and i do not even want to bring up formal proofs using process calculi)
  • i want more support from the platform to help me troubleshoot lock contention, deadlocks, cpu-intensive threads, and other concurrency-related infrastructure. sun’s hotspot has some rudimentary stuff in place, but i want more things out of the box (azul claims that they have always-on built-in tools in their product, but i have not played with them)
  • speaking of azul, i need to study them more. although perceived as a boutique solution, they are addressing issues that everyone will be facing in just a few years. it seems they ported sun’s hotspot to their hardware, which allowed them to achieve scaling by automatically replacing synchronization with optimistic concurrency, which scales much better. incidentally, this truism about optimistic concurrency has been obvious to database folks for decades

libraries/mini-languages

one of the approaches is to focus on your problem domain and come up with a library/language that solves your particular problem and abstracts away concurrency. web frameworks (J2EE, rails), or ETL tools, or even databases are all examples of such approaches.

this is where my interest lies as an app developer – how can i make concurrent programming easier for me, the layman.

the bottom line is that if we insist on using low-level synchronization primitives, it would be really hard to paper over the underlying complexities. right now there is no generic universal approach that will simplify concurrent programming. so at this point a pragmatic programmer is left with patterns, supporting libraries, and heuristics.

to be continued

there are some patterns (for the lack of a better word) that i found to be helpful in dealing with concurrency; there is also some stuff on the horizon that promises all sorts of benefits – is there really a silver bullet? but also there is plenty of stuff that has been with us for decades, and i would be the first one to bow my head in shame, acknowledging my ignorance.