about processes and engines

Archive for the ‘ruby’ Category

Houston.rb ruote presentation by Wes Gamble

During the last Houston.rb meeting, Wes Gamble talked about ruote. Here are his slides :


Wes does an excellent job at presenting the main concepts in ruote : process definitions, participants, workflow vs state management and so on. I’m lucky to have people evangelizing ruote, as I’m too much of a “you don’t understand workflow ? You certainly don’t need ruote” type.

There was a funny coincidence : my friend Alain Hoang was visiting Houston at that time and he joined the meeting. Alain and I were co-workers when I started working on ruote. He helped me with the initial Rakefile and tamed rote to build an initial website for the project. He also came up with the initial suggestion for the rufus namespace idea.

Many thanks to Wes for his great presentation (and to Alain for all his help).


Written by John Mettraux

August 16, 2010 at 12:46 am

Posted in ruby, rufus, ruote, workflow

ruote demos and examples

Recently, contributions to ruote have been on the rise.

I’ve listed them on the ruote source page, but here is a small recap :

Torsten is maintaining ruote-on-rails : “A simple example Rails app for demonstrating the usage of ruote in Rails using RuoteKit”.

Eric Dennis has gone a step further with ruote-rails-example : he extended Torsten’s work to add ruote-amqp participants.

David Greaves, in the context of his BOSS project, has announced a minimal-ish ruote / ruote-amqp setup.

Special kudos as well to David for his ruote overview page in the wiki.

Two other “in the wild” examples of ruote : David Austin’s qips-wfmgr and Chris Beer’s fedora-workflow.

Many thanks to all of these people, they are making ruote better.

Written by John Mettraux

June 21, 2010 at 1:43 am

Posted in opensource, ruby, ruote, workflow

ruote 2.1.10 released

ruote is an open source workflow engine implemented in Ruby.

It takes as input process definitions and interprets them. It routes work among participants according to the flow described in those process definitions.

Version 2.1.10 is out and contains lots of small improvements and features requested and developed with the help of various people.

I have also updated/upgraded the various storage libraries around ruote and added to their ranks two new gems.


in ruote

The changelog has all the details, but let’s look at the LocalParticipant module which saw the most improvements. Following suggestions by Oleg and Avishai, I added the on_reply, re_dispatch, reject and put/get methods to it.

A participant has a consume(workitem) method which is called as the participant gets handed a workitem. When dealing with a remote participant, it’s sometimes necessary to have logic alongside the consume method to deal with the workitem when it comes back. That’s the role of the on_reply(workitem) method.

re_dispatch(workitem) forces a re-delivery from the participant expression to the ‘real’ participant.

The reject(workitem) method is about participants that reject a workitem, forcing the worker to let another worker deal with the dispatching to the participant.

LocalParticipant now has a pair put(k, v) / get(k) for stashing data between consume / on_reply. The data is stashed directly in the ParticipantExpression.
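A minimal sketch of the consume / on_reply / put / get dance, without the ruote gem : a plain Hash stands in for the stash that a real Ruote::LocalParticipant keeps in its ParticipantExpression, and the call to the remote service is elided.

```ruby
# Illustrative only : the Hash-backed put/get below mimics the stash
# that ruote 2.1.10 persists in the ParticipantExpression.
class SketchParticipant

  def initialize
    @stash = {} # stand-in for the expression-backed put/get pair
  end

  def put(k, v); @stash[k] = v; end
  def get(k); @stash[k]; end

  def consume(workitem)
    put('dispatched_at', Time.now.utc.to_s)
    # ... hand the workitem over to the remote service here ...
  end

  def on_reply(workitem)
    # the workitem came back : enrich it with the stashed data
    workitem['dispatched_at'] = get('dispatched_at')
  end
end
```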

around ruote

Since CouchDB 0.11 is out and it allows ?feed=continuous&include_docs=true, I could drop the polling work from ruote-couch and simply keep connected and “react” on incoming changes in messages and schedules. It makes for a robust ruote-couch. I also made sure ruote-couch 2.1.10 can be shared by multiple workers.
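The “react on incoming changes” part boils down to reading the feed line by line. A hedged illustration (not the actual ruote-couch code) : each non-empty line of a continuous _changes feed is a JSON object, and with include_docs=true the changed document rides along under the “doc” key.

```ruby
# Illustrative only, not ruote-couch : parse one line of a CouchDB
# continuous _changes feed and return the changed document, if any.
require 'json'

def handle_change_line(line)
  line = line.strip
  return nil if line.empty? # heartbeat newline, keeps the connection alive
  JSON.parse(line)['doc']   # nil unless the feed was opened with include_docs=true
end
```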

This ruote-couch isn’t very fast, but I’ve come to appreciate CouchDB and use it in 2 or 3 projects. I like the comfort of having Couch as a storage for ruote.

I have upgraded ruote-dm as well. It’s a DataMapper backed ruote storage gem. I made sure ruote-dm could be shared by multiple workers (though I still have an issue with one of those special tests, I hope to fix that in the coming weeks). It seems to work well with the latest DataMapper (1.0).

For the fun, I wrote a ruote-redis gem. As the name implies, it uses Redis as a persistence mechanism for ruote. It’s fast, multi-worker ready and was fun to develop. Not sure if I will use it in production, but it can at least sit as an example. There have been very interesting developments in Redis; this ruote-redis only uses a very limited set of its features. There is probably a better way.

I got interested in Beanstalk, its simplicity and minimal size are seductive. I started working with it as a kind of transient storage, but ended up writing a server/client thing. You can start a ruote-beanstalk storage linked to a beanstalk queue, a server, and then start an engine and some workers that will communicate with the real storage via the beanstalk queue. Queue in the middle.

My initial purpose when investigating Beanstalk was to use it to pass workitems between the engine and the ‘real’ participants. I didn’t forget this and ruote-beanstalk comes with participant and receiver implementations. It could become an alternative to ruote-amqp; it is at least a good example of a participant / queue / receiver set.

Finally, there’s ruote-cukes, a gem that adds extensions/examples for testing ruote process definitions with Cucumber.

next steps

I still have to improve the documentation, there are lots of weak / missing points in it.

The listen expression will be enhanced, I want it to listen (react) to process errors and to flows entering or leaving a given tag (told you).

Lots more things.


Many thanks to Torsten, Ian, Oleg, Avishai, Patrice and Matt. And also many thanks to Wes Gamble (and Sergio) for organizing a ruote BOF at the latest Railsconf.

BTW, David Greaves is writing an overview in the ruote wiki as he’s assembling his build system. Feel free to chime in.

This release is dedicated to my friend Sawada Tomoaki, who left us last friday.


Written by John Mettraux

June 15, 2010 at 6:57 am

Posted in ruby, ruote, workflow

retiring rufus-tokyo

As you probably know, rufus-tokyo is a Ruby FFI wrapper for Tokyo Cabinet|Tyrant, the fine pieces of software delivered by Hirabayashi Mikio.

Rufus-tokyo is 12 or 13 months old, but it’s time to retire it (maintenance mode).

James Edward Gray II is building his Oklahoma Mixer which will, hopefully, completely overlap rufus-tokyo and simply be better, very soon. Once the mixer covers Ruby 1.9.x, JRuby and Tokyo Tyrant (see todo list), rufus-tokyo will be irrelevant.

This is great for me (and for you), we will get a better Ruby library for TC/TT. Please note that for Tokyo Tyrant we already have a very fast option with Flinn Mueller’s ruby-tokyotyrant, which will always be faster than a FFI Tokyo Tyrant library.

James is exposing his motivations for Oklahoma Mixer on its front page. I have a few clarifications to make about them and rufus-tokyo. I use this blog to make these clarifications more accessible.

Why not just use rufus-tokyo?

There is already a Ruby FFI interface to Tokyo Cabinet and more called rufus-tokyo. I am a huge fan of rufus-tokyo and have contributed to that project. I have learned a ton from working with rufus-tokyo and that code was my inspiration for this library.

That said, I did want to change some things that would require big changes in rufus-tokyo. Here are the things I plan to do differently with Oklahoma Mixer:

* Tokyo Cabinet’s B+Tree Database has some neat features, like custom ordering functions, that are hard to expose through rufus-tokyo due to the way it is designed. I would have had to rewrite pretty much that entire section of the library anyway to add these features.
* There are some places where rufus-tokyo uses more memory than it absolutely must or slows itself down a bit with extra iterations of the data. Again, this is a result of how it is designed. It allows more code reuse at the cost of some efficiency. I wanted to improve performance in those areas.
* I’m working on some higher level abstractions for Tokyo Cabinet that I eventually plan to include in this library. These extra tools are the reason I needed to make these changes and additions.
* Finally, rufus-tokyo paved the way to a Ruby-like interface for Tokyo Cabinet. Previous choices were scary in comparison. I wanted to push that movement even further though and get an even more Hash- and File-like interface, where possible.

It’s important to note though that rufus-tokyo does do some things better and it always will. Its advantages are:

* It offers a nice Ruby interface over a raw C extension which is faster than using FFI. I have no intention of duplicating that interface, so rufus-tokyo will remain the right choice for raw speed when using MRI.
* For now, rufus-tokyo is more full featured. It provides interfaces for Tokyo Tyrant and Tokyo Dystopia. I would like to add these eventually, but I’m starting with just Tokyo Cabinet.
* It offers a pure Ruby interface for communicating with Tokyo Tyrant without needing the C libraries installed. I won’t be copying that either, so it will remain the right choice for a no dependency install.
* For now, it’s more mature. A lot of developers have used it and contributed to it. It’s probably the safer library to trust for production applications at this time.

The “raw C extension” mentioned is provided by Hirabayashi-san. Rufus-tokyo only provides a (mostly) unified interface to it (same interface for FFI and the C exts).

The “pure ruby interface for communicating with Tokyo Tyrant” again is provided by Hirabayashi-san.

That doesn’t leave me much merit. All this wrapping effort was code-named Rufus-Edo and you can read more about the motivations and mechanisms in ‘rufus-tokyo 0.1.9 (Rufus::Edo)’. You will also find more details in my Edo Cabinet presentation.

So I’d advise TC/TT Rubyists to support the effort of James and Flinn. For those of you who want to participate in the larger Tokyo Cabinet|Tyrant community, Flinn has set up the Tokyo Cabinet Users mailing list, where lots of people help and share.

Rufus-tokyo is now in maintenance mode. I will still help people with the occasional bug or mem leak. The mailing list is still open (it’s the one for my rufus libraries) and if you know how to write an issue report, you will always find help (random tweets do not count).

Thanks for the fun on the way.


Written by John Mettraux

February 26, 2010 at 12:25 am

ruote and decision tables

ruote 2.1.7 is out and it felt like it was time to update the CsvParticipant that could be found in ruote 0.9.

Ruote has been relying on rufus-decision for its decision table needs and ruote 0.9 was integrating a CsvParticipant directly. For ruote 2.1, the DecisionParticipant is left in the rufus-decision gem.

It’s not the first time I’ve written about decision tables. But I noticed that I hadn’t really written anything about how to mix ruote processes and decision tables.

Let’s consider the decision table in the adjacent figure.

Given two input values, “type of visit” and “participating physician ?”, the level of “reimbursement” is decided.

The CSV version of this decision table is available online.

Clearly this decision has to take place after the customer has filled in his reimbursement claim and it has been reviewed by a clerk in the organization.

Up until now, applying the reimbursement rules was part of the tasks of the clerk (with probably some downstream checks), and our goal is to remove some of that burden.

Our process definition would look like :

Ruote.process_definition :name => 'reimbursement', :revision => '1.0' do
  cursor do
    customer :task => 'fill form'
    clerk :task => 'control form'
    rewind :if => '${f:missing_info}'
    determine_reimbursement_level
    finance :task => 'reimburse customer'
    customer :task => 'reimbursement notification'
  end
end
There are four participants involved in that process definition. Customer, Clerk and Finance are implemented with classical inbox/worklist participants : workitems for them get placed in their workqueue. (With ruote 2.1 I tend to use StorageParticipant for that, or I emit the workitem via a ruote-amqp participant towards the target worklist or task manager).

“rewind” isn’t a participant, it’s a ruote expression working with the cursor expression.

That leaves us with the participant “determine_reimbursement_level” to implement.

The straightforward Ruby implementation of the participant would leverage a BlockParticipant and look like :

engine.register_participant 'determine_reimbursement_level' do |workitem|
  workitem.fields['reimbursement'] =
    case workitem.fields['type of visit']
    when 'doctor office'
      workitem.fields['participating physician ?'] == 'yes' ? '90%' : '50%'
    when 'hospital visit'
      workitem.fields['participating physician ?'] == 'yes' ? '0%' : '80%'
    when 'lab visit'
      workitem.fields['participating physician ?'] == 'yes' ? '0%' : '70%'
    end
end

Given the values in the fields ‘type of visit’ and ‘participating physician ?’ this participant sets the value of the field ‘reimbursement’.

All is well, but you may have noticed that the decision table is rather simplistic. And how does it cope with change ? What will it look like when the rules get hairy ? Will the Ruby developer have to intervene for each change ?

(it might not be a bad thing to rely on the Ruby developer, they tend to test their creations carefully)

Those rules are produced by ‘business users’ and one of their favourite tools is the spreadsheet. Rufus-decision can read CSV output of decision tables. The decision participant would look like :

require 'rufus/decision/participant'
  # gem install rufus-decision

engine.register_participant(
  'determine_reimbursement_level',
  Rufus::Decision::Participant,
  :table => %{
in:type of visit,in:participating physician ?,out:reimbursement
doctor office,yes,90%
doctor office,no,50%
hospital visit,yes,0%
hospital visit,no,80%
lab visit,yes,0%
lab visit,no,70%
})

Well, OK, that’s nice, but… We still have to restart the application at each change to the decision table or at least re-register the participant with the updated table. Tedious.

If our business people publish the decision table as CSV somewhere over the intranet (yes, I know, it’s an old-fashioned word), the decision participant can be simplified to :

require 'rufus/decision/participant'
  # gem install rufus-decision

engine.register_participant(
  'determine_reimbursement_level',
  Rufus::Decision::Participant,
  :table => '')
    # the URL of the published CSV table goes here

Each time a decision has to be taken, the table is read. This lets business users modify the rules at will.
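Under the hood, a decision table lookup is just “the first row whose in: cells all match the workitem fields wins”. A toy rendering of that mechanism (not rufus-decision itself), using Ruby’s stdlib CSV :

```ruby
# Toy decision-table lookup, illustrative only : the first matching row
# sets the "out:" fields on the workitem fields hash.
require 'csv'

TABLE = CSV.parse(<<~CSV)
  in:type of visit,in:participating physician ?,out:reimbursement
  doctor office,yes,90%
  doctor office,no,50%
  hospital visit,yes,0%
  hospital visit,no,80%
  lab visit,yes,0%
  lab visit,no,70%
CSV

def decide(table, fields)
  header = table.first
  ins  = header.each_index.select { |i| header[i].start_with?('in:') }
  outs = header.each_index.select { |i| header[i].start_with?('out:') }
  row = table.drop(1).find do |r|
    ins.all? { |i| fields[header[i].sub('in:', '')] == r[i] }
  end
  return fields unless row
  outs.each { |i| fields[header[i].sub('out:', '')] = row[i] }
  fields
end
```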

The whole example is packaged as reimbursement.rb. It runs the process from the command line and grabs the decision table online. It shouldn’t be too difficult to take inspiration from it and integrate decision tables in web applications (like ruote-kit or barley).

There are many open possibilities when combining a workflow engine and a decision mechanism. One immediate advantage is that they can evolve with different rhythms. They can also be used in isolation : the clerk in our process could talk the claim over the phone with the customer and run blank decisions to help the customer with incomplete forms.

Of course, you can use rufus-decision without ruote, it’s a plain Ruby gem.

– the source code of rufus-decision
– the source code of ruote
– ruote’s documentation
– the mailing list for ruote and rufus-decision


Written by John Mettraux

February 17, 2010 at 4:32 am

Posted in bpm, bpms, decision, ruby, rufus, rules, ruote

it’s barley a task manager

In the comments of the previous post, Karl asked me about ruote : “how to resume execution of the workflow from a web application ?”.

Though Karl’s question was more about receivers, I took the time to explore a sample ruote application.

Barley is a [for fun] task manager. It takes its name from the barely repeatable processes. Of course, it can’t compare with the excellent Thingamy work processor.

Barley is a Sinatra web application wrapping a ruote workflow engine. Ruote can run many process instances issued from different process definitions, but barley’s processes are all instantiated from a single process definition.

The cursor expression does most of the work. It runs in sequence a ‘trace’ participant and a ‘next’ participant. Then if the field ‘next’ is set in the workitem (most likely it has been chosen by the ‘current’ participant), the cursor is rewound and the task ends up on the ‘next’ participant’s desk.

PDEF = Ruote.process_definition :name => 'barley' do
  cursor do
    trace
    participant '${f:next}'
    rewind :if => '${f:next}'
  end
end

If the ‘next’ field is empty, the cursor will be over, letting the process terminate.

Barley simply lets users push tasks around. There is no explicit “delegation”, “escalation” or whatever…

For the rest, look at barley itself, it’s all in one single file : a workflow engine, two participants and the webservice, with the haml template at the bottom.

Torsten has deployed a demo instance at (he also has a version hooked to twitter auth).

You can install it and run it locally with :

  curl -o barley.rb
  gem install sinatra haml json ruote
  ruby barley.rb

Then head to


To remove ruote and its dependencies from your gems :

  gem uninstall ruote rufus-json rufus-cloche rufus-dollar rufus-lru rufus-mnemo rufus-scheduler rufus-treechecker


Written by John Mettraux

January 29, 2010 at 8:22 am

Posted in ruby, ruote, workflow

ruote 2.1 released

Just released ruote 2.1.1. That requires a long, boring, explanatory blog post, especially since I was promising a ruote 2.0.0 release and now I’m jumping to 2.1.1.

My prose isn’t that great, the transitions from one paragraph to the next are often poor. Let’s try anyway.

Ruote is an open source ruby workflow engine. Workflow as in “business process”. Sorry, no photo workflow or beats per minute.

0.9, 2.0, 2.1

In May 2009, I had the feeling that ruote 0.9 was a bit swollen and very baroque in some aspects, some early design decisions had forced me to do some juggling and it felt painful. At that point, I was working with other languages than Ruby and I had written the beginning of a ruote clone in Lua.

Fresh ideas are tempting, after a while I got back to Ruby and started a ruote 2.0 rewrite.

I was fortunate enough to benefit from the feedback of a startup company that pushed ruote into terrain it was not meant to travel. Whereas I had built ruote and the systems that came before it for small organizations, trusting integrators to divide the work among multiple engines, these people started stuffing as much work as possible into one engine, with extra wide concurrent steps.

Those experiments and deployments brought back invaluable lessons and matured the 2.0 line very quickly, but, in October, I was seriously tempted to venture into a second rewrite.

Ruote 2.0 was a complete rewrite, ruote 2.1 is only a rewrite of the core. The test suite is mostly the same and working from the 2.0 test suite allowed me to build a 2.1 very quickly.

things learnt

So what triggered the 2.1 rewrite ?

At first, a feeling of unease with the way certain things were implemented in 2.0. Some of them were inherited from the previous ruote 0.9, while others came from the rewrite, in both cases, I made bad decisions.

Second, those load reports. The 2.0 workqueue is only a refinement of the 0.9 one. The workqueue is where all the tasks for the expressions that make up the process instances are gathered. It’s a transient workqueue, processed very quickly, internal to the engine.

Ruote, being a workflow engine, is meant to coordinate work among participants. Handing work to a participant occurs when the process execution hits a ‘participant expression’. This usually implies some IO operations since the ‘real’ participant is something like a worklist (a workitem table in some DB), a process listening somewhere else and reachable via a message queue (XMPP, AMQP, Stomp, …).

As IO operations depend on things outside of the engine, the dispatch operation was usually done in a new thread, so that, meanwhile, the engine’s workqueue could go on with its work (and the other process instances).

When farming out work to ‘real’ participants with wide concurrency, the engine spawns a thread for each dispatch and it gets crowded. The engine was vulnerable at this point : right in the middle of dispatching, with lots of tasks still in the transient workqueue, is the best time for things to go wrong, for the engine to crash, and for processes to stall.

Third ? Is there a third ? Ah yes, I started playing with CouchDB (Thanks Kenneth), and many of the ideas in it are seducing and inspiring.


My passion for workflow engines comes from my [naive] hopes of reducing human labour, reducing total work by reducing/removing the coordination chores, the friction. Tasks left should be automated or left to humans. Left to humans… the bottleneck is in the human participants… Then there is this half-conviction that a workflow engine doesn’t need to be that fast, since most of the time, it sits waiting for the human-powered services to reply (or to time out).

Also I had settled on Ruby, a beautiful language, with not so fast interpreters (though this is changing, on many fronts, with so many people liking Ruby and giving lots of their time to make it better). The value of a workflow engine lies at its edge, where coordination meets execution, in the participants. That’s where Ruby shines (once more), since it makes it easy to write custom participants very quickly and to leverage all the crazy gems out there.

(I say Ruby shines once more, because I think it shines as well as the implementation language for the engine itself. It feels like the boundary between pseudo-code and code vanished)

Slow people, slow me, slow rubies.

storage and workers

I mentioned CouchDB a few lines before, that database made “out of the web”. Writing things to the disk is a necessity for a workflow engine. It’s an interpreter, but its processes are meant to last (longer than the uptime of a CPU).

One of CouchDB’s strengths comes from its eventual consistency, and it made me wonder if, for ruote, it wouldn’t be worth sacrificing a bit of short-term performance for medium-term scalability / robustness.

A frequent complaint about Ruby is its global interpreter lock and its green threads, making it not so sexy in a 2+ core world. Threads are easy to play with from Ruby, but they’re quite expensive (some patches address that) and if you want to get past the global lock, there seems to be only JRuby for now.

Workqueue, queue ? Worker. Why not multiple workers ? One [Ruby] process per worker, that would be ideal ! Let them share a storage for Expressions, Errors, Schedules, Configuration and… Messages.

Ruote 2.1 has a persisted workqueue. For the items placed in the queue, I use the term “message”, it covers “tasks” (sent for execution by expressions) and “events” (intra-engine notifications).

An interesting consequence : a persisted workqueue can be shared by multiple workers, workers not in the same Ruby process.

A message ordering the launch of a ruote process looks like :

  {"type": "msgs",
   "_id": "1984-2151883888-1261900864.59903",
   "_rev": 0,
   "action": "launch",
   "wfid": "20091227-fujemepo",
   "tree":
     ["define", {"name": "test"}, [
       ["sequence", {}, [
         ["participant", {"ref": "alice"}, []],
         ["participant", {"ref": "bob"}, []]]]]],
   "workitem": {"fields": {}},
   "variables": {},
   "put_at": "2009/12/27 08:01:04.599054 UTC"}

Note the process definition tree passed in the ‘tree’ part of the message, the empty workitem, the freshly minted process instance id in ‘wfid’, and yes, this is JSON. The process, named ‘test’, simply passes work from alice to bob (in a sequence).
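The tree is a recursive [ name, attributes, children ] triplet, so walking it takes only a few lines. For instance, a small walker (illustrative, not part of ruote) collecting the participant names referenced by a definition :

```ruby
# Each node of a ruote definition tree is [expression_name, attributes,
# children]. Recursively collect the 'ref' of every participant node.
def participant_refs(tree)
  name, attributes, children = tree
  refs = (name == 'participant') ? [attributes['ref']] : []
  refs + children.flat_map { |child| participant_refs(child) }
end

tree =
  ['define', { 'name' => 'test' }, [
    ['sequence', {}, [
      ['participant', { 'ref' => 'alice' }, []],
      ['participant', { 'ref' => 'bob' }, []]]]]]

participant_refs(tree) # => ["alice", "bob"]
```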

What about the worker that has to handle this message ? It would look like :

  require 'ruote'
  require 'ruote/storage/fs_storage'

  storage = Ruote::FsStorage.new('ruote_work')
  worker = Ruote::Worker.new(storage)
  worker.run
    # current thread is now running the worker

Whereas the 2.0 workqueue was transient, 2.1 has an external workqueue exploitable by 1+ workers. Simply set the workers to share the same storage. 1+ workers collaborating for the execution of business process instances.

Apart from tasks that need to be handled ASAP, some tasks are scheduled for later handling. A workflow engine thus needs some scheduling capabilities : there are activity timeouts, deliberate sleep periods, recurring tasks… The 0.9 and 2.0 engines were using rufus-scheduler for this. The scheduler was living in its own thread. The 2.1 worker integrates the scheduler in the same thread that handles the messages.

(Since I wrote rufus-scheduler (with the help of many many people), it was easy for me to write/integrate a scheduler specifically for/in ruote)

A storage is meant as a pivot between workers, containing all the run data of an engine, ‘engine’ as a system (a couple storage + worker at its smallest). Workers collaborate to get things done, each task (message) is handled by only one worker and expression implementations are collision resilient.

The result is a system where each expression is an actor. When a message comes, the target expression is re-hydrated from the storage by the worker and is handed the message. The expression emits further messages and is then de-hydrated or discarded.

Thus the code of the worker is rather small, there’s nothing fancy there.
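A toy, in-memory rendering of that loop (nothing like ruote’s actual code, all names made up) : pop one message, “re-hydrate” the target expression, let it handle the message, queue whatever it emits in turn.

```ruby
# Illustrative only : the actor-ish handling loop described above.
class ToyWorker

  def initialize(queue, expressions)
    @queue = queue             # the shared, persisted message queue
    @expressions = expressions # expression id => expression
  end

  # handle one message; returns false when the queue is empty
  def step
    msg = @queue.shift or return false    # one worker, one message
    expression = @expressions[msg['fei']] # re-hydrate the target
    @queue.concat(expression.handle(msg)) # push follow-up messages
    true
  end
end
```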

There will be some daemon-packaging for the worker, thanks to daemon-kit.

storage, workers and engine

We have a storage and workers, but what about the engine itself ? Is there still a need for an Engine class as an entry point ?

I think that yes, if only to make 2.1 familiar to people used to ruote pre-2.1. At some point, I wanted to rename the Engine class of 0.9 / 2.0 to ‘Dashboard’ and force the term ‘engine’ to designate the whole storage + workers + dashboards system. But well, let’s stick with ‘Engine’.

Throughout the versions, the engine class shrank. There is now no execution power in it, just the methods needed to start work, cancel work and query about work.

Here is an engine, directly tapping a storage :

  require 'ruote'
  require 'ruote/storage/fs_storage'

  storage = Ruote::FsStorage.new('ruote_work')
  engine = Ruote::Engine.new(storage)

  puts "currently running process instances :"
  engine.processes.each do |ps|
    puts "- #{ps.wfid}"
  end

No worker here, it’s elsewhere. That may prove useful when wiring the engine into, say, a web application. The workers may run in dedicated processes while the web application gives the commands, by instantiating and using an Engine.

Launching a process looks like :

  pdef = Ruote.process_definition :name => 'test' do
    concurrence do
      alpha
      bravo
    end
  end

  p pdef
    # ["define", {"name"=>"test"}, [
    #   ["concurrence", {}, [
    #     ["alpha", {}, []],
    #     ["bravo", {}, []]]]]]

  wfid = engine.launch(pdef)

The execution will be done by the workers wired to the storage, somewhere else.

It’s OK to wrap a worker within an engine :

  require 'ruote'
  require 'ruote/storage/fs_storage'

  storage = Ruote::FsStorage.new('ruote_work')
  worker = Ruote::Worker.new(storage)
  engine = Ruote::Engine.new(worker)
    # the engine starts the worker in its own thread

  puts "currently running process instances :"
  engine.processes.each do |ps|
    puts "- #{ps.wfid}"
  end

This is the pattern for a traditional, one-worker ruote engine. With the exception that it’s OK to add further workers, later.

In fact, as long as there is one worker and it can reach the storage, the process instances will resume.

storage implementations

There are currently 3 storage implementations.

The first one is memory-only, transient. It’s meant for testing or for temporary workflow engines.

The second one is file based. It’s using rufus-cloche. All the workflow information is stored in a three level JSON file hierarchy. Workers on the same system can easily share the same storage. It uses file locks to prevent overlap.
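The idea can be pictured with a toy store (rufus-cloche itself does more, e.g. revision checks) : JSON documents laid out in a type/id file hierarchy, writes guarded by a file lock so that workers on the same machine don’t overlap.

```ruby
# Toy illustration of a file-based, lock-protected JSON document store.
# Not rufus-cloche : just the shape of the idea.
require 'json'
require 'fileutils'

class TinyCloche

  def initialize(dir)
    @dir = dir
  end

  def put(type, id, doc)
    FileUtils.mkdir_p(File.join(@dir, type))
    File.open(path(type, id), File::RDWR | File::CREAT, 0644) do |f|
      f.flock(File::LOCK_EX) # one writer at a time
      f.truncate(0)
      f.write(JSON.dump(doc))
    end
  end

  def get(type, id)
    File.file?(path(type, id)) ? JSON.parse(File.read(path(type, id))) : nil
  end

  protected

  def path(type, id)
    File.join(@dir, type, "#{id}.json")
  end
end
```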

The third one is Apache CouchDB based. It’s quite slow, but we’re hoping that optimizations on the client (worker) side, like using em-http-request, will make it faster. Speed is decent anyway and, on the plus side, you get an engine that spans multiple hosts and Futon as a low-level tool for tinkering with the engine data.

Those three initial storage implementations are quite naive. Future, mixed implementation storages are possible. Why not a storage where CouchDB is used for storing expressions while messages are dealt with via RabbitMQ ? (Note the doubly underlying Erlang here)

dispatch threads

As mentioned previously, the default ruote implementation, when it comes to dispatching workitems to participants, will spawn a new thread each time.

People complained about that, and asked me about putting an upper limit to the number of such threads. I haven’t put that limit, but I made some provisions for an implementation of it.

There is in ruote 2.1 a dedicated service for the dispatching, it’s called dispatch_pool. For now it only understands new thread / no new thread and has no upper limit. Integrators can now simply provide their own implementation of that service.

Note that already since 0.9, a participant implementation is consulted at dispatch time. If it replies to the method do_not_thread and answers ‘true’, the dispatching will occur in the work thread and not in a new thread.
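A sketch of that decision (the real dispatch_pool service is a ruote internal; this is just its shape) : spawn a dispatch thread unless the participant declares do_not_thread.

```ruby
# Illustrative only : dispatch in a new thread, unless the participant
# replies 'true' to do_not_thread.
def dispatch(participant, workitem)
  if participant.respond_to?(:do_not_thread) && participant.do_not_thread
    participant.consume(workitem) # dispatch in the work thread
    nil
  else
    Thread.new { participant.consume(workitem) } # new dispatch thread
  end
end
```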

2.1 participants

Speaking of participants, there is a catch with ruote 2.1. Participants are now divided into two categories. I could say stateless versus stateful, but that would bring confusion with state machines or whatever. Perhaps, something like “by class” vs “by instance” is better. By class participants are instantiated at each dispatch and are thus manageable by any worker, while by instance participants are already instantiated and are thus only manageable via the engine (the Engine instance where they were registered).

BlockParticipants fall into the by instance category.

  engine.register_participant 'total' do |workitem|
    t = workitem.fields['articles'].inject(0) do |sum, art|
      sum + art['price'] * art['count']
    end
    workitem.fields['total'] = t
  end

will thus only be handled by the worker associated with the engine.

A by class participant is registered via code like :

  engine.register_participant(
    'alpha', Ruote::Amqp::AmqpParticipant, :queue => 'ruote')

Whatever worker gets the task of delivering a workitem to participant alpha will be able to do it.

from listeners to receivers

Ruote has always had the distinction between participants that reply by themselves to the engine (like BlockParticipant) and other participants which only focus on how to deliver the workitem to the real (remote) participant.

For those participants, the work is over when the workitem has been successfully dispatched to the ‘real’ participant. The reply channel is not handled by those participants but by what used to be called ‘listeners’. For example, the AmqpParticipant had to be paired with an AmqpListener.

On the mailing list and in the IRC (#ruote) channel, I saw lots of confusion between listeners and the listen expression. That made me decide to rename listeners as ‘receivers’.

A receiver could also get packaged as a daemon.

I still need to decide if a receiver should be able to “receive” ‘launch’ or ‘cancel’ messages (orders). Oh, and the engine is a receiver.


I released ruote 2.1.1 at the gemcutter.

For some people, a gem is a necessity, while for others a tag in the source is a sufficient release.

I hope that by releasing a gem from time to time and tagging the repo appropriately I might satisfy a wide range of people.

I’m planning to release gems quite often, especially in the beginning.

The unfortunate thing with a workflow engine, is that many business process instances outlive the version they started at and it’s not that easy to stay backward compatible all along… That explains some of my reticence about releasing.


ruote-kit is an engine + worklist instance of ruote accessible over HTTP (speaking JSON and HTML), it has a RESTful ideal.

It’s being developed by Kenneth, with the help of Torsten. My next step is to help them upgrade ruote-kit from ruote 2.0 to 2.1. It should not be that difficult : apart from the engine initialization steps, the rest of ruote is quite similar from 2.0 to 2.1.


Two major versions of ruote for 2009, that’s a lot. I hope this post will help understand why I went for a second rewrite and how ruote matured and became, hopefully, less ugly.

Happy new year 2010 !


source :
mailing-list :
irc : #ruote


Written by John Mettraux

December 31, 2009 at 9:57 am

rufus-tokyo 1.0.4

東京 from its bay

This release contains a fix for a memory leak cornered by Jeremy Hinegardner and James Edward Gray II.

The gem is available via gemcutter and the source is on github.

Many thanks to James and Jeremy and merry Christmas to you all !


Written by John Mettraux

December 25, 2009 at 12:40 am

Ruedas y Cervezas, first instance

“ruote” is the Italian for “wheels”. It’s the name of our ruby workflow engine. The name was proposed by Tomaso Tosolini, it stuck and we liked how people pronounced it “route” which is a good fit for a tool that routes work among participants.

If you translate “ruote” to Spanish you get “ruedas”. Add a thirsty community and you get “ruedas y cervezas”.

This event was organized and hosted by, a madrilenian company doing wonders with Java, JRuby, Scala and their blend of agile magic.

The meeting was divided in three parts. First, presented their work with ruote. They’ve bound it into their “Blue Mountain ERP” to drive the necessary business processes. Workitems are communicated to participants over XMPP. The platform is Java and the languages are JRuby and Scala. As you might have guessed, they run ruote on JRuby.

Then, Iván Martínez, student in the Universidad Politécnica de Madrid presented his work on a graphical process editor for ruote. It’s based on Flex and looks very promising. The guys encouraged him to share his code on, I hope he will follow this suggestion.

For the part when people really get thirsty, I cooked up a bunch of slides about ruote 2.0. They were presented by Nando Sola. I have embedded them here. The first half of them is about why ruote 2.0, while the latter half is about the enhancements to the ruote process definition language.

I felt it was necessary to communicate a bit about ruote 2.0, especially since, in Madrid, people have tremendously contributed to ruote 0.9 (many thanks guys !). Ruote 2.0 is a jump from 0.9, literally a jump, not a step : explanations are required.

Then beer ensued. A ruedas y cervezas (rc2) is planned for next month, I wish I could attend. So if you’re in Madrid and you’re interested in ruote or, less narrowly, in Ruby / JRuby / Scala / Java, contact Nando Sola and join the and UPM guys for the rc2.



Written by John Mettraux

October 15, 2009 at 10:35 am

Posted in bpm, ruby, ruote, workflow

rufus-lua 1.1.0

Just released rufus-lua 1.1.0. The original post I wrote about rufus-lua is named ruby to lua.

This release strongly benefited from Scott Persinger’s work on Laminate, a tool for safe user-authored templates for Vodpod.

Scott needed strong support for callbacks from Lua to Ruby and also ways to pass values back and forth. Many thanks to Scott for the great collaboration.

Those of you interested in Lua and web applications should have a look at Norman Clarke’s lua-haml and at Daniele Alessandri’s mercury (sinatra-like web framework for Lua).


Written by John Mettraux

September 30, 2009 at 5:04 am

Posted in lua, ruby, rufus