
My New Favorite: Fluent Interfaces

I have always been a big fan of the fluent programming style, and lately I have been using it a lot. For instance, today I built an API to interact with VMware where you can add new Linux or Windows machines and configure them fluently. In this post, I am going over an example that I built. The sample is a City Planner city builder: you can add streets and homes to your city. The printout at the end of this post is the result of the API we will create.

Let’s start with our city planner. There isn’t much here other than the methods to add a street or a home.
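
A minimal sketch of what that might look like (the member names here are my assumptions):

using System.Collections.Generic;

// A hypothetical CityPlanner: it simply tracks the streets and homes added to it.
public class CityPlanner
{
    public List<string> Streets { get; } = new List<string>();
    public List<Home> Homes { get; } = new List<Home>();

    public void AddStreet(string name) => Streets.Add(name);
    public void AddHome(Home home) => Homes.Add(home);
}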

To get started, we need to add some extension methods to start building out our api. Let’s create an extension that can create a street.
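
Here is a sketch of such an extension; the WithStreet name is my assumption:

public static class CityPlannerExtensions
{
    // Returning the planner is the trick: the caller keeps the instance
    // and can chain the next call onto it.
    public static CityPlanner WithStreet(this CityPlanner planner, string name)
    {
        planner.AddStreet(name);
        return planner;
    }
}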

This is a standard extension method that hangs off of the CityPlanner. The one difference that makes all of this work is the return type: the method returns the CityPlanner. If you look back at the methods of the CityPlanner object, you can now call a method again, since you still have the instance of the planner that you are working with.

Building a city street is easy (on a computer), but building a home is a little more challenging. To build one we need to specify how many floors and windows it has, among other things. That sounds like a candidate for another builder. You can see on the CityPlanner that there is a method to build a house. One of its parameters is an anonymous function that takes a HomeBuilder object and returns a Home. This allows us to incrementally build our house in the same fluent style.
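
A sketch of that method, added to the same extensions class (the WithHome name and delegate shape are assumptions based on the description):

public static CityPlanner WithHome(this CityPlanner planner, Func<HomeBuilder, Home> build)
{
    // Hand the caller a fresh builder and keep whatever Home it produces.
    planner.AddHome(build(new HomeBuilder()));
    return planner;
}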

If you look at the HomeBuilder class, you can see that all of the methods return the builder itself, allowing you to continue the method chaining.
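
A hypothetical version of the builder and the Home it produces:

public class Home
{
    public int Floors { get; set; }
    public int Windows { get; set; }

    public override string ToString() => $"Home: Floors={Floors}, Windows={Windows}";
}

public class HomeBuilder
{
    private readonly Home _home = new Home();

    // Every step returns the builder itself so the chain can continue.
    public HomeBuilder WithFloors(int floors) { _home.Floors = floors; return this; }
    public HomeBuilder WithWindows(int windows) { _home.Windows = windows; return this; }

    public Home Build() => _home;
}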

In the end, we can print out our blueprint for our city.
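
With the sketches above, a chain like this produces the blueprint below:

var city = new CityPlanner()
    .WithStreet("Main Street")
    .WithStreet("1st Street")
    .WithHome(home => home.WithFloors(3).Build());

Console.WriteLine("Welcome to my city!");
Console.WriteLine("Streets");
city.Streets.ForEach(s => Console.WriteLine($"\t{s}"));
Console.WriteLine("Homes");
city.Homes.ForEach(h => Console.WriteLine($"\t{h}"));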

Welcome to my city!
Streets
	Main Street
	1st Street
Homes
	Home: Floors=3, Windows=0

I enjoy this kind of expressive programming.  I think it is easy on the eyes.  Cheers!

Kafka, Sadly It’s Time To Part Ways

I had big dreams for the perfect union between my company and Kafka.  I could see jagigabytes (technical term for a huge number) upon jagigabytes of data passing through our network from the massive clickstream data that we would produce.  The power of having our data in-house and not relying on the paid services to store and cull our data was huge.

That was the dream; now for the reality :(.  We tried to bend Kafka to meet our use case, but Kafka wouldn’t break.  I badly wanted the pub/sub application to work at our small scale.  When I say our scale, I mean somewhere south of 1000 messages per day for business transaction purposes.

My thinking was that if we could get it to work at our scale, then we would have learned a great deal to help us with my grander vision.  I can say that I achieved the goal of learning, but not much more.

The first issue we had was that messages were not always returned from the topic in a single fetch request.  I saw that during development, but I didn’t pay enough attention to what I was seeing.  That turned out to be a fatal flaw.

We were losing messages

When we configured our jobs to read from various topics, we configured them to poll at specific intervals.  When we spaced the polls out to an hour or greater, we narrowed the window between the retention policy and our opportunities to read the data.  For example, with a retention policy of 16 hours and a poll interval of one hour, we had only 16 chances to read a given message.  If data was not returned during those 16 individual read attempts, it was lost.

What happened is that we were missing critical event data and we couldn’t figure out why.  It took some time before I figured out that you have to keep asking for the data until it is returned; a single empty fetch does not mean the topic is empty.  That was issue number one.
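
To make that concrete, here is a minimal sketch of the fix, assuming a consume endpoint shaped like the Confluent REST proxy’s v1 API; the host, group, instance, and topic names are placeholders:

using System;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> FetchUntilReturnedAsync(HttpClient http)
{
    while (true)
    {
        var request = new HttpRequestMessage(HttpMethod.Get,
            "http://proxy:8082/consumers/my_group/instances/my_instance/topics/events");
        request.Headers.Add("Accept", "application/vnd.kafka.binary.v1+json");

        var response = await http.SendAsync(request);
        var body = await response.Content.ReadAsStringAsync();

        // An empty array is a normal response; it does not mean the topic is empty.
        if (body != "[]")
            return body;

        await Task.Delay(TimeSpan.FromSeconds(5));
    }
}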

We were losing messages

Now that we were able to get the data back, all of a sudden all of the data was gone.  This was really baffling!  I thought we had solved our problem with receiving data, but from the outside it looked as if we were having the same issue again.  I couldn’t figure out why, after 16 hours, our queue was empty regardless of how recently the last message had been published.

I did all the reading that someone should have to do in a lifetime (except for you, please continue reading) and I couldn’t solve it.  So I turned to the Kafka mailing list for help.  It turns out that Kafka enforces retention per log file: it deletes the whole file containing the message that has aged out of the retention policy, along with everything else in it.  This was exactly what we were seeing.

We could send a steady stream of data and, like clockwork, all of it would be gone once the cleanup began.  It turns out that the default log file is a gigabyte in size.  Remember, our volumes are very low and we wouldn’t fill that up in a year.  That could be solved by setting the log file size really low; we set it to 1024 bytes.
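
For reference, a sketch of the broker settings involved, as a server.properties fragment (the 16-hour retention carries over the example from earlier):

# Default segment size is 1 GiB; at our volume a segment would never fill,
# so shrink it drastically (the 1024 bytes mentioned above).
log.segment.bytes=1024
# Messages should age out after 16 hours.
log.retention.hours=16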

We were losing messages

That brings us to our third and last issue.  The straw that broke the camel’s back.  The nail in the coffin.  OK, I will stop.  Now that we were receiving data reliably and our log files were right-sized, what else could be going on?

With their REST client, there are two methods of committing an offset back when operating in a consumer group.  You can auto-commit, where the cursor is set to the last entry that was returned, or you can wait and commit that cursor position once you are done with the data.  To be fair, we had some issues in our code that caused the processing to halt and stop consuming messages.  The result was messages that were already committed but never processed.
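
A rough sketch of the “commit when you are done” flow, reusing the fetch sketch from above; the offsets endpoint shape is again an assumption modeled on the REST proxy’s v1 API, and Process is a hypothetical handler.  The point is only the ordering:

// Fetch, process everything, and only then commit the cursor position.
var records = await FetchUntilReturnedAsync(http);
Process(records);   // hypothetical handler; throws if anything fails

var commit = new HttpRequestMessage(HttpMethod.Post,
    "http://proxy:8082/consumers/my_group/instances/my_instance/offsets");
commit.Headers.Add("Accept", "application/vnd.kafka.v1+json");
await http.SendAsync(commit);

// If the process dies before the commit, the messages are redelivered
// instead of being marked done without ever being processed.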

Without the ability to grab a single message at a time, we were stuck.  We had hoped that Confluent 3.0 (Kafka 0.10) was going to save the day with max.poll.records, but they didn’t roll that into the REST client.  Disappointed, we realized that we had really hit a wall.

We sucked it up and decided to turn our backs on Kafka for now.  We were diligent about creating abstractions that will allow us to change with reasonable ease.  We will be taking a day to research and design the new solution.  I think that this was a good lesson in picking a solution that matches the current use case.  Even though I really wanted to set us up to use Kafka for my grander vision, it just wasn’t the right choice.

I haven’t turned my back on Kafka completely; I still think it is awesome, and it will have a home with us in the future.  Sadly, for now, I can’t fit your size, so I will have to leave you on the rack.  Goodbye.


Is Scrum Agile?

You may think that the title is utterly ridiculous, but bear with me.  I recently had the opportunity to sit through a class with Allen Holub on Designing for Volatility, and it was there that he disrupted one of my long-held beliefs.  I was trained on Scrum by Ken Schwaber in 2008 and again in 2012, so I was sold, but now I am thinking a little differently.  I want to explore the question: “Is Scrum Agile?”

Scrum works on a timed boundary that begins with a planning session and ends with a review/retrospective.  These are designed to set up an interval (sprint) where the work is immutable.  Typically a team sets an interval boundary of two, three, or four weeks.  If there is a necessary change in the work, there is an allowance to abort the interval and start over.

Aborting a sprint is a significant decision and is not to be taken lightly.  In that event, the team stops what they are doing, buttons up, and starts over with a planning session to plan the new work.  That seems agile, but what I see all too often is that we try to make sure we have enough work for the team to work on, as opposed to making sure that we are delivering the most value to the customer as fast as a quality job can provide.

I believe that trying to find enough items so that each member is busy may start to diverge from agile.  If this is your practice, you will inevitably set lower-value items higher in the backlog to fill the team’s time.  That seems to stand in the face of the agile principle of delivering the highest-value items as fast as possible.

“Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software.” – Agile Manifesto

The team should always be working on the next highest-priority items, no exception.  I would guess the question that follows that statement is, “What will the rest of the team members do during the sprint?”  It is a good question to ask, but it is also easy to answer.

In a development cycle, there are many activities that have to take place, such as requirement refinement, test case development, test automation, and of course development.  Teams can rally around a single item to see it through to release.  I was very skeptical about swarming around a user story and assumed it was full of waste, but I have since been proven wrong.

Swarming around user stories is the optimal activity of a self-organizing team.  If you have a cross-functional development team (quality, development, and business), then you are executing on another one of the principles.

“The best architectures, requirements, and designs emerge from self-organizing teams.” – Agile Manifesto

Another divergence from agile is around the idea of continuous delivery.  The models that I have seen place the deployment activity at the end of a sprint.  This is in opposition to the first principle of the Agile Manifesto.

“Our highest priority is to satisfy the customer through early and continuous delivery
of valuable software.” – Agile Manifesto

Now, to be fair, there is nothing in Scrum that says you cannot deliver software as often as possible.  The goal of each sprint is to complete a potentially deployable increment of software.  But at face value, the end-of-sprint demarcation promotes deployment at the end instead of when the software is ready.

One of the main things that Allen drove home was that agile means you are always working on the highest priority without the need for artificial boundaries.  I think I agree with that.  Scrum has several ceremonies that occur every sprint, and I wonder how many of them are needed in that regimented fashion.

Whether you are moving backlog items around to fill time or waiting until the end of the interval to deploy, you have to ask if you are really an agile shop.  The team that I am on is taking a more agile approach.  We are a Scrum shop, so we have to operate at some level within that process, but we have been taking in one work item at a time, swarming until it is done, and then asking the product owner what is next.  As a self-organizing team, we have decided that this is what allows us to be agile and deliver quality stories to the customer, and it works.  These are just my thoughts, but I would love to debate this further.  Cheers

Reading Rainbow, err audio book rainbow

Growing up, I was never interested in reading books.  Up until I joined the military, I could count the number of fiction books that I had read on one hand.  They just couldn’t keep my interest.

One of the issues with reading books is that I have a hard time focusing for more than a minute or two, and I mean that literally.  I end up having to re-read a sentence or paragraph several times in order to grasp it.  This leads to a great deal of frustration and, surprisingly, peaceful slumber.  To read, perchance to dream 🙂

There was one day that actually changed my life, and the lesson wasn’t even intended for me.  I was standing in the hallway of the van that we worked in when I overheard my sergeant tell another Marine to read a book on combustion engines.  My friend replied, “Why would I read about engines?”  The response is what lit the fire.  The sergeant explained that you should read anything, regardless of immediate need, because it teaches you how to learn and thereby improves your problem solving.

I took that lesson and ran with it.  At the time, my job was as a calibrator (making sure things are measured correctly) and it was not very exciting.  Once I got out, I was working in the civilian world doing the same thing, and it was worse.  It was a rat race; the only goal was to do as much work as we could, as fast as we could, to increase our billing.

I couldn’t stand to do this work much longer; I have a hard time doing repetitive work, I go crazy.  I decided that I wanted to write software to do the work for me (I will spare you the details).  Taking the lesson that I overheard, I went to town.

I started reading books, magazines, and anything else I could get my hands on to help me learn how to develop software.  Somehow I was now able to read and read without falling asleep, even staying up all night to learn more.  Interestingly enough, I was learning to program on my own.

Here I am, 15 years later, and I am still reading.  Not as much as I would like, but things are changing.  Remembering my issue with having to re-read things several times, I have been turned on to audiobooks.  These are great because Audible has a feature to go forward or backward by 30 seconds.  This works for me, and I do work those buttons like a pro.

I still haven’t gotten around to listening to fiction, but there is always hope.  I am on a leadership book binge currently, and the next topic that I want to listen to is history.  As I get older, I realize that you have to read, and read often, so that you can stay sharp and, in my field, relevant.

I am not sure that I ever told my sergeant about the lesson that he taught me, but it was one of the best lessons that I have learned in life, and I thank him for it.  I wish everyone could experience the moment that torch is lit.

First Chance Exception Settings, Your Friend

“I am trying to debug, but this stupid exception keeps happening”.  I have seen that situation play out many times over the years.  You are trying to exercise your code, but the same spot keeps throwing an exception.  This doesn’t impact your functionality, but it’s an annoyance.

Sometimes it takes a while for the frustration to hit a peak, but when it does, you are beside yourself.  There is functionality in Visual Studio to help you out, but before we get to that, let’s talk philosophy.

There are many schools of thought with regard to developing code.  On one side, if your tests (unit, integration, etc.) are not passing, then you cannot check in.  This would imply that if the code base you are working on is throwing exceptions, then you cannot check in until it is fixed.  Under this philosophy, everything stops until the code is in good working order.

On the other side, you have blinders on and you only focus on getting your own code to work.  In this scenario, it is OK to skip over some exceptions, because they are someone else’s problem.  I don’t have to preach about the issue with “someone else’s problem,” but that attitude does not help your team.

The scenario that I left out is less philosophy and more practice.  Sometimes there are bugs in the code that are OK to leave unfixed.  This could be for many reasons, like priority or planned obsolescence.

I always recommend that my team members use a little-known set of toggles in Visual Studio called Exception Settings.  Exception Settings give you first crack at the code (the “first chance”) when an exception is about to be thrown, before any handler runs.

[Screenshot: the Exception Settings window in Visual Studio]

Without this functionality turned on, the code will throw and you may not be able to proceed.  Breaking on first chance exceptions allows you to stop at the throw site, move the instruction pointer to a different line that will function normally, and continue.  This is very powerful when you are trying to figure out where an exception is coming from and what the state of the system is (threads, call stack, variables, etc.) when it happens.
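
For example, in a contrived sketch like this one, the catch swallows the exception, so without a first chance break the debugger never stops at the throw site and the state at that moment is lost (the path and type here are placeholders):

using System.IO;

public static class ConfigLoader
{
    public static string LoadSettings()
    {
        try
        {
            // May throw FileNotFoundException.
            return File.ReadAllText(@"C:\app\settings.json");
        }
        catch (FileNotFoundException)
        {
            // Swallowed: execution continues and the throw is invisible at
            // runtime, but a first chance break stops right at the throw.
            return string.Empty;
        }
    }
}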

I have been in situations where it was OK, and even expected, for code to throw, but I don’t want to have to view it each time.  In this case, Visual Studio allows you to deselect the exceptions that you don’t care about.  You can see below that I have chosen to break on all exceptions except for Microsoft.JScript.JScriptException.

[Screenshot: the CLR Exceptions list with Microsoft.JScript.JScriptException deselected]

To use this functionality, you have to set these toggles in advance of starting your application.  Consider this: you are running a long process, but you get interrupted by an exception that you did not expect and do not care about.  In that case, you have the option to turn off first chance breaks for that exception type for all future executions.  You need only deselect the “Break when this exception type is thrown” checkbox.

[Screenshot: deselecting “Break when this exception type is thrown” for an exception type]

In the case above, the exception is expected as part of the SSPI authentication process, so I can ignore it.

It doesn’t matter which school of thought you or your team subscribes to; the ability to toggle first chance exceptions is an important hammer to have in your toolbox.

Side Bar:  I can’t recommend turning on this functionality enough.  It has helped me catch a lot of bugs before they got into the wild.  Hope this helps, cheers.

SSH.NET & echo | sudo -S

This is an extension of yesterday’s post about getting the test harness to connect to vSphere.  I am going to show how to use SSH.NET to run commands on a server, and more specifically how to make sudo calls remotely.

The goal for today was to take the fresh new VM server and install and configure Kafka.  Sounds easy, right?  I use SSH.NET to upload all of the files that I need, as well as a shell script to orchestrate the operations.  We are using systemctl as the daemon manager, so the *.service files need to be in the right directory:

/usr/lib/systemd/system

There is one snag: you need sudo because that directory is protected.  Looking at some of the options for sudo, I came across the -S toggle, which allows sudo to take the password from standard input.

echo "my_password" | sudo mv myfile /protected_directory/

This will take the password and pipe it into the sudo move command.  I tried a bunch of ways to make the same approach work over SSH.NET, starting with this:

using (var client = new SshClient("my_host_server", "my_user_name", "my_password"))
{
    client.Connect();

    // Pipe the password into sudo -S, exactly like the shell one-liner above.
    var command = client.CreateCommand(
        "echo 'my_password' | sudo -S sh /home/my_user_name/scripts/install_and_configure.sh");

    var output = command.Execute();
    client.Disconnect();
}

This command works great on the server, but it doesn’t work in the SshClient’s session.  I tried a bunch of variations of the code above, but none of them worked.  It was pretty frustrating.  If you take a peek at the output of the Execute() method, you will see the message “sorry, you need a tty to run sudo.”  This is a significant clue.  A tty, or terminal emulator, is what you would get if you created an ssh session with PuTTY.  When we use SSH.NET like we do above, we do not have a virtual terminal running.  That is what the error is telling us.  After googling the interwebs, I started to piece together what a solution might look like.


We need to somehow emulate a tty terminal using the library.

The first thing that we need to do is create a new client and connect the session.

SshClient client = new SshClient(server_address, 22, login, password);
client.Connect();

We need to start creating the terminal that will be used by the ShellStream.  First we have to create a dictionary of the terminal modes that we want to enable.

IDictionary<Renci.SshNet.Common.TerminalModes, uint> modes = 
    new Dictionary<Renci.SshNet.Common.TerminalModes, uint>();
modes.Add(Renci.SshNet.Common.TerminalModes.ECHO, 53);

Setting the ECHO terminal mode (to 53 here) enables the echo functionality.  Now we need to create our ShellStream: we specify the terminal emulator that we want, the dimensions of the terminal, a buffer size, and lastly the modes that we want to enable.

ShellStream shellStream = 
    client.CreateShellStream("xterm", 80, 24, 800, 600, 1024, modes);

Now that we have a terminal emulator to work with, we can start sending commands.  There are three commands that we need, and each has a response that we expect:

  1. Login
  2. Send our sudo command
  3. Send the password

We have already created our session and logged in, so there should be output waiting for us.  After we send our command, we should expect the password prompt.  Once we know that we are in the right spot, we can forward on the password.

// Consume the prompt that is waiting from the login.
var output = shellStream.Expect(new Regex(@"[$>]"));

// Send the sudo command; sudo answers with a password prompt (the ':').
shellStream.WriteLine("sudo sh /home/my_user_name/scripts/install_and_configure.sh");
output = shellStream.Expect(new Regex(@"([$#>:])"));
shellStream.WriteLine(password);

At this point, we should have executed our install_and_configure.sh script successfully.  Putting it all together:

using System.Collections.Generic;
using System.Text.RegularExpressions;
using Renci.SshNet;
using Renci.SshNet.Common;

SshClient client = new SshClient(server_address, 22, login, password);
client.Connect();

// Enable echo so the emulated terminal behaves like an interactive session.
IDictionary<TerminalModes, uint> modes = new Dictionary<TerminalModes, uint>();
modes.Add(TerminalModes.ECHO, 53);

ShellStream shellStream = 
    client.CreateShellStream("xterm", 80, 24, 800, 600, 1024, modes);

// Wait for the login prompt, send the sudo command, then answer the
// password prompt with the password.
var output = shellStream.Expect(new Regex(@"[$>]"));

shellStream.WriteLine("sudo sh /home/my_user_name/scripts/install_and_configure.sh");
output = shellStream.Expect(new Regex(@"([$#>:])"));
shellStream.WriteLine(password);
client.Disconnect();

That is pretty much it.  I hope that this helps someone! Cheers