So I just came back from sunny Barcelona yesterday, and I have to admit, Barcelona is an amazingly beautiful city. But more importantly, Velocity is an amazing conference. This post is a short recap from my point of view, grouping my key takeaways from the conference.

Open source tools:

So, my favourite topic. I heard great talks about tools we already know and love, like Etsy's very own nagios-herald, and about some new ones I hadn't heard of yet but was totally amazed by. Let this be just a short list; check them out yourself:

Commercial tools:

I've seen some pretty awesome commercial tools, which were new to me and blew my mind as well. Here's a list in no particular order, because I can't rank them by awesomeness.

  • Ruxit What Ruxit does is the following: it has an agent on your box that monitors your system, which is pretty common for a monitoring system, right? :) But here comes the mind-blowing little added value, which they call contextual alerting. It creates correlations between incidents, so when you get hit by an alert you can see what the root cause was, what problems originated from that root cause, and the other little failing checks that stem from the same root as well. That saves lots of time for on-call folks. Awesomesauce, right?
  • Cedexis Cedexis provides a free RUM (Real User Monitoring) solution and a paid load balancing service. What Cedexis does is collect network data in their free RUM tier, so they have a global map of network traffic. And based on that, believe it or not, they provide the best routing between your service and the end user. DNS load balancing based on global metrics; well, this is my reaction:

image
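The contextual-alerting idea described above can be sketched as collapsing an alert storm into a single incident by walking each alert up a dependency graph. This is a toy illustration, not Ruxit's actual algorithm; the component names and dependencies below are made up:

```python
# Toy sketch of "contextual alerting": group firing alerts under a
# shared root cause by following a known dependency graph.
# The graph and alert names are invented for illustration.
depends_on = {
    "web-frontend": "app-server",
    "app-server": "database",
    "background-jobs": "database",
}

def root_cause(component):
    """Follow dependencies until we reach a component that depends on nothing."""
    while component in depends_on:
        component = depends_on[component]
    return component

def group_alerts(alerts):
    """Group firing alerts under their shared root cause."""
    incidents = {}
    for component in alerts:
        incidents.setdefault(root_cause(component), []).append(component)
    return incidents

alerts = ["web-frontend", "app-server", "background-jobs"]
print(group_alerts(alerts))
# → {'database': ['web-frontend', 'app-server', 'background-jobs']}
```

The on-call person then gets one incident ("database") instead of three separate pages.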

Technologies:

We heard lots of great talks on various technological topics, covering containers, anomaly detection, and a particularly good talk by Google's very own Ilya Grigorik about optimizing TLS performance. Just WOW.

Theoretical stuff:

As I mentioned before, we had great talks on anomaly detection, and oh boy, fiddling with time series comes with a glorious amount of deep, deep math. The talk by Arun Kejariwal was heavy on math, but I enjoyed every moment of it. The other anomaly detection talk was by the guy who is highly recognizable by his ever-changing facial hair, Theo Schlossnagle; the summary of his talk can be found here. We also heard a pretty good number of talks about microservices, deployment, and uptime monitoring.

Linux Containers

Linux containers are a really hot topic nowadays, and it's totally understandable why. They are the ops equivalent of microservice-oriented code, and as such they are meant to serve as the backbone of a microservice-oriented infrastructure. There was a presentation that dissected the real meaning of Linux containers, how they actually work, and what Docker or LXC abstracts away from the end user. Slides. The other great tutorial on this topic was about CoreOS and Kubernetes. Material on GitHub.

Summary

To summarize: I learned a lot, networked a lot, met lots of nice people, and thoroughly enjoyed the show. I'm really thinking of going back next year as well.

This post is just my personal opinion. I don't want to tell you to finish or not finish university; it's up to you to decide. Thanks.

So, it's official now: I'm a university dropout. And I actually don't really mind.

Why?

  • You don’t have to have a CS degree of any level to be a great engineer.
  • Mathematics was taught to a high standard, which was cool, but sadly you can't say the same about programming at all. We only had programming courses because we had to have them (you know, CS faculty).
  • Lots of the material (approximately 90% in Hungary, at ELTE) is basically useless to someone who has seen some code before university. I picked up the 10% of the material that I found useful, so I wasted no time.
  • Instead of the 90% useless material I can learn 90% useful stuff. For example Go; I'm into learning Go right now.
  • Since I started at my current workplace, I have taken two passive semesters, because I wanted to focus on the things I do there. I much prefer working on problems and learning useful stuff to sitting in a course and listening to a four-hour class about "How to write a class in Java" (a true story, actually).
  • I don't want to claim that you don't need deep theoretical knowledge to be a great software engineer, but you can learn the needed material on your own. Maybe it will be a bit harder, but it can be done. Have you ever heard of Coursera? I'm pretty sure you all have.
  • Experience is worth more than a diploma.

Basically I had to make a choice: go to night courses after work (giving up work was not even on my mind, I love it way too much) and give up all spare time, pet projects, and social activities. Well, you see my point, I think. So that was it.

As an Infrastructure Engineer, I've lately had the lovely task of developing new Open Source tools, or selecting existing ones that fit our needs at Ustream, and even extending them to fit our needs better. Such tools have massive potential to improve developer productivity in many ways, and as a developer I want to be productive, and I want to help other devs be more productive too. That's why I love it so much. I remember how astonished I was when I used, for example, Vagrant for the first time. My reaction was an immediate WOW. It was a really cool experience, to be honest, not to speak of more "visually intense" tools like Graphite. After that, I went to Monitorama, and I still refer to it as the most inspiring conference I have ever visited. I fell irreversibly in love with tools there. More than 1.5 years have passed since then, and now I can proudly say I have my (at least partially) own tools.

The first tool I had the chance to work on was Errbit, an error logging and aggregation service like airbrake.io. I really enjoyed working on it, and working with it, because it solves a real problem: logging errors can be hard.

Then came GitLab, a super awesome self-hosted Git repository management tool, something like GitHub Enterprise, just for free.

And then something totally our own: Openduty, the result of the first ever Ustream Hackathon (see previous post).

After that I even had the chance to contribute to an awesome piece of code called Changelog, written by one of my best friends, who works at Prezi.

And I'm still working on great stuff inside and outside the office. So yes, tools are awesome, and used well, they make life more awesome too. Automation makes life better, so:

image

The first ever Hackathon was held at Ustream a good month ago; this is just a short summary of my experiences and of how much fun it was.

Long story short, I led a team named Call of Duty and we reached 4th place out of 12 teams, which was a pretty decent achievement, I've got to say.

The story of the Hackathon: you have an idea, you gather your team, and you get one day to make a working prototype based on your idea. We wrote an alerting and incident escalation application, much like PagerDuty, but free, so the name DutyFree was absolutely trivial.

Soon you'll find the source code on Ustream's GitHub account; we just have to clean it up a little. Here is a good photo of the team, taken at 4 a.m.

On the evening of 25 March 2014 I gave a talk at the PHP Meetup held at Balabit about errors, Airbrake.io, Errbit, and what we do with our errors at Ustream. If you missed it and would like to relive the experience of me presenting, now you can. Recorded:


Video streaming by Ustream

My presentation:

So after 4 years I decided to move from Byethost (I also deleted my EC2 instance at the same time, by the way) to DigitalOcean. I also decided to give up WordPress and give Octopress a try. It was easy to set up, and finally I have a nice, static, super fast blog. I was getting tired of having one box on EC2 to do stuff and another one to host my blog; now it's all in one place and feels much better. And thanks to DigitalOcean for their awesome pricing, which is more than affordable. YAY.

Errbit and its hosted counterpart Airbrake are great tools to capture and track your application's exceptions. The interface they provide is a RESTful API to which you post errors in a nice XML format. The only problem with this is HTTP, or if I go deeper, TCP. If your app is having a really bad time and gets flooded with exceptions (for example in the case of cascading errors, e.g. a database outage), your application can run into serious locks if you try to send all the exceptions directly. Yes, of course you could use a queue to store exceptions and process them asynchronously, or you could just do the whole thing via UDP.
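To illustrate why UDP helps here (this is a generic sketch, not the actual Errbit wire protocol; the port number is an arbitrary assumption), a fire-and-forget datagram send hands the payload to the kernel and returns immediately, so there is no TCP connect or slow server to block the application:

```python
import socket

def send_error_udp(payload: bytes, host="127.0.0.1", port=8126):
    """Fire-and-forget error reporting over UDP.

    Unlike an HTTP POST, there is no connection setup or server
    acknowledgement to wait for, so a flood of exceptions cannot
    block the application. Port 8126 is an arbitrary choice here.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (host, port))
    sock.close()

# A hypothetical XML error notice; the real payload format is
# whatever the receiving proxy expects.
send_error_udp(b"<notice>something broke</notice>")
```

The trade-off is that delivery is best-effort: if the proxy is down, notices are silently lost, which is usually acceptable for error telemetry.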

That's why I started to hack on my pet project, really creatively named "Err-proxy", which is an error proxy listening on UDP for error messages and forwarding them over regular HTTP to an Airbrake or Errbit server. It's written in node.js (yes, I'm coding in Node, no thank you, I feel fine, it's just the right tool for the problem) and heavily inspired by statsd. It'll be in a working state in a few days, and I'll share the GitHub repo here (it's a private repository at the moment). Here it is: https://github.com/ustream/Errbit-proxy
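The proxy side of the idea can be sketched in a few lines. This is a minimal Python illustration of the concept, not the actual node.js implementation; the Errbit endpoint URL and port are placeholder assumptions:

```python
import socket
import urllib.request

# Hypothetical Errbit/Airbrake notice endpoint; substitute your server's URL.
ERRBIT_URL = "http://errbit.example.com/notifier_api/v2/notices"

def forward_http(xml_payload: bytes, url=ERRBIT_URL):
    """Relay one error notice to the error server over plain HTTP."""
    req = urllib.request.Request(
        url, data=xml_payload, headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

def run_proxy(port=8126, forward=forward_http, max_messages=None):
    """Listen on UDP; each datagram is one error notice to forward.

    A slow or unreachable error server only delays the proxy here,
    never the monitored application that fired the datagram.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    handled = 0
    while max_messages is None or handled < max_messages:
        datagram, _ = sock.recvfrom(64 * 1024)
        forward(datagram)
        handled += 1
    sock.close()
```

In production you would run `run_proxy()` as a long-lived daemon; `max_messages` exists only to make the loop stoppable for testing.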

So Monitorama.eu (my first ever Monitorama) just ended, and I just can't describe how cool it was.

By the way, Berlin is a really wonderful and amazing city. You've gotta love the vibrant scene there, with lots of historical parts worth mentioning.

So the talks and the people I met on the first day were cool, but the workshops on the second day? Man, I mean really, they were the icing on the cake.

Collectd, Riemann, Dashing, Graphite scaling, and last but not least a little about Descartes. But my favourite was Abe Stanway's talk about Kale. My mind was blown by the ideas of how many things become possible and less sucky with an automated anomaly detection system. But I liked the Dashing talk just as much, so I can't decide. I have a thing for pretty dashboards and automation; get over it.

It was good to be there; I would really like to come back next year as well.

I'm participating in a 48-hour game development competition, and I'll be streaming it live.

I came up with this: BeatTheBeat. I'll open-source it once I have the time to clean up the code.

There is a great project called Team Dashboards by Frederik Dietz, and I've been experimenting with it a little. Team Dashboards uses Thin as its web server of choice, but Thin with one worker is way too slow for my taste for this project.

I wanted to test how Unicorn performs with a single worker. Just a little tinkering with the Gemfile, thanks to the unicorn-rails gem, and I was ready to go.

The performance tests were run on my MBP.

Performance with Thin, one worker process:

deathowl:~/ (master✗) $ ab -n 100 -c 10 -r http://127.0.0.1:3000/ [20:57:12]
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)...Send request failed!
Send request failed!
..done

Server Software: thin
Server Hostname: 127.0.0.1
Server Port: 3000

Document Path: /
Document Length: 31424 bytes

Concurrency Level: 10
Time taken for tests: 131.099 seconds
Complete requests: 100
Failed requests: 3
(Connect: 0, Receive: 1, Length: 2, Exceptions: 0)
Write errors: 2
Total transferred: 3176786 bytes
HTML transferred: 3118586 bytes
Requests per second: 0.76 [#/sec] (mean)
Time per request: 13109.925 [ms] (mean)
Time per request: 1310.993 [ms] (mean, across all concurrent requests)
Transfer rate: 23.66 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 10123 4009.8 11487 19334
Processing: 0 2586 1574.8 2583 5405
Waiting: 0 351 1171.7 0 4302
Total: 1439 12709 3625.7 13546 19334

Percentage of the requests served within a certain time (ms)
50% 13546
66% 14208
75% 15166
80% 15306
90% 16129
95% 17240
98% 17916
99% 19334
100% 19334 (longest request)

Performance results with Unicorn (one worker process):

deathowl:~/ (master✗) $ ab -n 100 -c 10 -r http://127.0.0.1:3000/ [20:56:50]
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)...Send request failed!
Send request failed!
Send request failed!
Send request failed!
Send request failed!
Send request failed!
..done
Server Software:
Server Hostname: 127.0.0.1
Server Port: 3000

Document Path: /
Document Length: 31424 bytes

Concurrency Level: 10
Time taken for tests: 10.020 seconds
Complete requests: 100
Failed requests: 5
(Connect: 0, Receive: 1, Length: 4, Exceptions: 0)
Write errors: 6
Total transferred: 3051028 bytes
HTML transferred: 2992424 bytes
Requests per second: 9.98 [#/sec] (mean)
Time per request: 1002.014 [ms] (mean)
Time per request: 100.201 [ms] (mean, across all concurrent requests)
Transfer rate: 297.35 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 854 272.7 887 2116
Processing: 0 114 469.4 0 2116
Waiting: 0 116 469.1 0 2115
Total: 466 968 313.7 890 2116

Percentage of the requests served within a certain time (ms)
50% 890
66% 910
75% 917
80% 933
90% 1110
95% 2116
98% 2116
99% 2116
100% 2116 (longest request)

As you can see, Unicorn performed a lot better than Thin in this case.
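Plugging the two mean times per request from the ab output above into a quick calculation shows just how big the gap is:

```python
# Mean "Time per request" values reported by ab above, in milliseconds.
thin_ms = 13109.925
unicorn_ms = 1002.014

speedup = thin_ms / unicorn_ms
print(f"Unicorn was about {speedup:.1f}x faster per request")
# prints: Unicorn was about 13.1x faster per request
```

The requests-per-second figures tell the same story: 9.98 versus 0.76, roughly a thirteen-fold difference, though with a handful of failed requests in both runs the exact ratio should be taken with a grain of salt.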