Saturday, August 12, 2017

Taking the Risk

Okay, so I did it. On Tuesday, I married my best friend, Kelly.

This is the high-risk decision that I first mentioned back in November and again in January. It was a really huge step for both of us. I can't think of anyone else I'd rather share my journey with (obviously). We've been friends since college -- which was *many* years ago -- and we make awesome partners.

Gettin' Hitched
We went to the courthouse, and most of our immediate family shared the ceremony with us.

I leave it up to you to try to figure out which family members go with each of us.

This is why I moved to Washington back in June, which was a huge step in itself: I'd lived in the same apartment for 15 years and the same city for over 20. We're a bit in the wilderness (it's 30 miles to the nearest traffic signal), so that has been a transition for me. But it's really beautiful out here. I'm totally okay with trading 7-Eleven for wild blackberries.

I've taken lots of pictures since I've arrived: Living in Skagit County. (And a few when I visited in February: 1250 Miles, 2 Cats, and a Banjo.)

So, how do I know that Kelly is the right partner for me? Well, first of all, she made this chart for "Sharing Household Chores":

Next, when I saw these salt and pepper shakers at Target, she said that we had to buy them:

I don't think we need any more proof than that. (Although we do have some other things in common, too.)

Blended Family
Kelly and I have already seen some of the joys of our blended family: she had 2 dogs and I had 2 cats. Now together we have 2 dogs and 2 cats. The 6 of us are learning to get along together (sort of).

Toby & Amanda

Lulu & Mae

Lu & Toby trying to reach an agreement
Driving Home
The drive home from the courthouse was a bit tense. Each of us knew that we made the right decision, but the gravity of such a big step weighed us down a bit.

Fortunately, on the road home, we ran across a perfect example of "not my job". It was a bit unbelievable, so we had to turn around on the highway to get a picture:

The laughter was exactly what we needed.

To those of you who have been wishing me the best on my decision, Thank You.

Now the real adventure begins...

Tuesday, August 1, 2017

Jeremy on Technology and Friends

While I was in Detroit, I sat down with David Giard (@DavidGiard) for his Technology and Friends show. He gave me the chance to talk about my t-shirt and how I became a Social Developer.

Episode 490: Jeremy on Social Developers

If you'd like your own copy of the shirt, you can pick it up at the xkcd store: Just Shy.

Happy Coding!

Saturday, July 29, 2017

I'm Still Here

It's been a bit quiet around here. But there's still a lot going on.

Something Cool
A few weeks ago, I was in Detroit, MI for Detroit.Code(). I showed my friend Cameron Presley (@pcameronpresley) some of the things I've been doing with mazes (code and articles). When I ran the sample, I got the longest path I've seen yet:

Pretty cool. It starts in the middle and loops around almost the entire maze before it gets to the end in the lower left corner.

The heat map isn't quite as interesting:

But you can see the lightest in the center and the darkest in the lower left.

Speaker Confessions
Something interesting happened this past week with the #SpeakerConfessions tag on Twitter. It was started by my friends Sarah Withee (@geekygirlsarah) and Nate Taylor (@taylonr), and it went around the world. It was really interesting to hear things from a lot of my speaker friends and from a lot of speakers I don't know.

A couple of my articles from this past year seemed particularly relevant. I sent out this one: Help Those Behind You, which talks about the dangers of comparing yourself to other people. We should really focus on helping others. The only valid comparison is comparing yourself now to yourself in the past. Refocus on the things that are moving you forward.

I was also happy to see other folks replying with this article: Speakers, Let's Change Our Terminology: No More Rejections. This is something I've had to struggle with in the past (very recent past), and it's kind of nice to see that I'm not the only one. But we've still got a long way to go.

In addition, there's an article that I will (hopefully) write in the next day or so to tell some of my struggles with submitting to events, striving for the wrong things, and focusing on where my strengths really are. We'll see if I can get that out before I go to Kansas City.

And speaking of Kansas City, I've been frantically preparing for the Kansas City Developer Conference which is happening next week. I've got a full day workshop on Asynchronous Programming in C#, and I'm making sure the labs, samples, and materials are all in good shape. I've still got a ways to go with that, but things will be ready on time.

In addition to the workshop, I get to do two of my favorite talks: Clean Code: Homicidal Maniacs Read Code, Too! and DI Why? Getting a Grip on Dependency Injection.

A lot of my friends will be there, and I'm looking forward to meeting lots of new folks, too. It should be a really great week.

Other Distractions
I've had a lot of distractions that have kept me busy. About 6 weeks ago, I moved to Washington state. And it's really beautiful out here. It's strange how many things you have to do when you change states. The good news is that I do have a Washington driver's license, and I've also got Washington plates on my car. So I won't have anyone scowling at my California plates anymore.

In addition to my 2 cats, I'm now living with 2 dogs.

The move is part of something bigger that is coming up real soon. It's the answer to High Risk vs. Low Risk (and Update: High Risk vs. Low Risk) that I wrote about last year. I did make the decision, and I'll have an update to that in a few weeks.

I'll have more content soon. Once things get settled a bit more, I'll be able to concentrate on adding great things on a regular basis. Until then, make sure you stay focused on the important things in life.

Happy Coding!

Wednesday, July 5, 2017

More Maze Programming: Adding Some Bias for Longer Paths

Okay, so I know that I said I might be done with Mazes for Programmers. But I went one chapter further and picked up a couple more algorithms: Hunt and Kill and the Recursive Backtracker.

As a reminder, you can pick up the code here: GitHub - jeremybytes/mazes-for-programmers.

The reason these are interesting is that they add a bit of bias. Last time, we saw an unbiased algorithm that is completely random. This was better than the very biased algorithms that we had seen previously, but it was also a bit slow since it relies on random walks to fill an entire grid.

This works great, but good mazes are more than random events. So by adding a bit of bias, we can make longer paths, twistier paths, and fewer dead-ends. This makes a maze more interesting to solve.

Hunt and Kill
The first slightly-biased algorithm is Hunt and Kill. It does a random walk, linking each unvisited cell (a cell with no links yet) as it goes. When it can't go any further (all of the current cell's neighbors have already been visited), it switches to "hunt" mode: it scans from the top left corner for an unvisited cell that borders a visited one, links those two cells, and starts walking again from there.
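To make that walk/scan cycle concrete, here's a rough Python sketch of the idea (the project code is C#; the grid-as-dictionary representation here is my own simplification, not the book's classes):

```python
import random

def hunt_and_kill(rows, cols, rng=random):
    """Carve a maze with Hunt and Kill.

    Cells are (row, col) tuples. links maps each cell to the set of
    cells it has a passage to; a cell with no links is "unvisited".
    """
    links = {(r, c): set() for r in range(rows) for c in range(cols)}

    def neighbors(cell):
        r, c = cell
        return [(nr, nc) for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= nr < rows and 0 <= nc < cols]

    current = (0, 0)
    while current is not None:
        unvisited = [n for n in neighbors(current) if not links[n]]
        if unvisited:
            # "Kill": keep the random walk going into unvisited territory.
            nxt = rng.choice(unvisited)
            links[current].add(nxt)
            links[nxt].add(current)
            current = nxt
        else:
            # "Hunt": scan from the top left for an unvisited cell that
            # borders a visited one, link it in, and walk from there.
            current = None
            for cell in sorted(links):
                if not links[cell]:
                    visited = [n for n in neighbors(cell) if links[n]]
                    if visited:
                        n = rng.choice(visited)
                        links[cell].add(n)
                        links[n].add(cell)
                        current = cell
                        break
    return links
```

Because every cell gets exactly one link the moment it is first visited, the result is a "perfect" maze: every cell is reachable, and there are no loops.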

Because this is not completely random, it is much faster than the Aldous-Broder algorithm that we saw last time. And it creates some longer passages. Here's a graphical version:

And here's a text version:

It's particularly easy to spot the longer passage here when we look at the path from the center to the lower right corner.

It also makes nice patterns for larger mazes:

I find myself getting lost in some of the images. The dark areas represent the spots furthest from the center. So the top image has lots of things "furthest" away. The bottom image has a few things "furthest" away.

Recursive Backtracker
The other slightly-biased algorithm is the Recursive Backtracker. Like Hunt and Kill, this creates longer passages with fewer dead ends. But it operates a bit differently. When it hits a dead end, it doesn't start scanning for a new starting point (like Hunt and Kill). Instead, it keeps track of all the cells that have been visited by using a stack. If it comes to a dead end, it pops cells off the stack until it finds one that has an unvisited neighbor (meaning, another path to be started). When all of the cells are popped off the stack, that means we've visited each cell and we're done.
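Here's the same kind of Python sketch for the Recursive Backtracker, using an explicit stack rather than actual recursion (again, a simplified grid of my own, not the book's or the project's C# code):

```python
import random

def recursive_backtracker(rows, cols, rng=random):
    """Carve a maze by walking randomly and backtracking at dead ends."""
    links = {(r, c): set() for r in range(rows) for c in range(cols)}

    def neighbors(cell):
        r, c = cell
        return [(nr, nc) for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= nr < rows and 0 <= nc < cols]

    start = (0, 0)
    stack = [start]        # the current path of visited cells
    visited = {start}
    while stack:
        current = stack[-1]
        choices = [n for n in neighbors(current) if n not in visited]
        if choices:
            nxt = rng.choice(choices)
            links[current].add(nxt)
            links[nxt].add(current)
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()    # dead end: pop back to a cell with unvisited neighbors
    return links
```

When the stack empties, every cell has been visited and we're done. That stack is also where the memory cost mentioned below comes from: speed is traded for keeping the whole path around.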

Here's the graphical version of Recursive Backtracker on a small grid:

And here's the text version:

Again, we can see a pretty long, twisty path from the center to the corner. And this is just as interesting when we create larger mazes:

I kind of like these because you'll find these twisty paths of one color running through the middle of a section with a lighter or darker color.

In addition to the slightly different patterns that come out of these algorithms, there are a few performance differences. I've noticed that the Hunt and Kill takes a little bit longer to complete. The Recursive Backtracker is a little bit faster, but it consumes more memory since it's keeping track of every single cell on the stack. Each of these has pros and cons, and if you're interested, pick up your own copy of Mazes for Programmers (Amazon link).

What's Next
As you can see, I've switched up the colors a little for these samples. But this is done by changing values in the code. Now that I'm seeing the interesting patterns that are created by these, I'm going to explore adding some more interesting color elements as well.

I've always been intrigued by pseudo-randomly created pictures. I can see that I'm going to be running quite a few of these in the future.

As always, keep playing and keep exploring.

Happy Coding!

Sunday, July 2, 2017

More Maze Programming: A Non-Biased Algorithm

So back to Mazes for Programmers, this time looking at a non-biased algorithm. This is where I start thinking about repurposing this whole maze-building thing.

As a reminder, you can get the code here: GitHub - jeremybytes/mazes-for-programmers. I added some comments to the "Program.cs" file so that you can do some experimentation with different algorithms, grids, and grid sizes.

Biased Algorithms
The two algorithms previously implemented were Sidewinder and Binary Tree. These both have biases, meaning there are certain patterns that repeat based on the algorithm. For example, the binary tree algorithm tends to create a diagonal across the maze. This is visible with a 15x15 maze:

But it's even more obvious when we have a larger maze (155 x 155):

Another thing is that this produces a straight path along the top (north) edge of the maze as well as the right (east) edge of the maze.
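The bias is easy to see in a sketch of the Binary Tree algorithm itself. For each cell it flips a coin between carving north or east, and the edge cells have no choice at all, which is exactly where the straight corridors and the diagonal drift come from (a Python sketch with my own grid representation, not the project's C# code):

```python
import random

def binary_tree(rows, cols, rng=random):
    """For each cell, carve a passage either north or east at random."""
    links = {(r, c): set() for r in range(rows) for c in range(cols)}
    for r in range(rows):
        for c in range(cols):
            options = []
            if r > 0:
                options.append((r - 1, c))  # north (row 0 is the top edge)
            if c < cols - 1:
                options.append((r, c + 1))  # east
            if options:                     # the top-right corner has no options
                n = rng.choice(options)
                links[(r, c)].add(n)
                links[n].add((r, c))
    return links
```

Top-row cells can only go east and rightmost-column cells can only go north, so those two edges are always unbroken corridors; every other cell tends to drift toward that corner, which produces the diagonal texture.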

Non-Biased Algorithms
Mazes for Programmers presents 2 non-biased algorithms. I've implemented one of them: the Aldous-Broder algorithm. This was independently developed by David Aldous and Andrei Broder. It uses a random walk to create a non-biased algorithm.

One of the important points for a maze algorithm is that it should not create loops, and this algorithm ensures that while still creating random paths. Basically, it wanders the grid at random and only carves a passage when it steps into a cell that hasn't been visited yet, so a loop never gets created in the first place.
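In outline, the random walk looks something like this (a Python sketch with a simplified grid of my own; the book's Ruby version and my C# port are structured around grid/cell classes instead):

```python
import random

def aldous_broder(rows, cols, rng=random):
    """Random-walk the grid, linking each cell on its first visit."""
    links = {(r, c): set() for r in range(rows) for c in range(cols)}

    def neighbors(cell):
        r, c = cell
        return [(nr, nc) for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= nr < rows and 0 <= nc < cols]

    cell = (rng.randrange(rows), rng.randrange(cols))
    unvisited = rows * cols - 1
    while unvisited > 0:
        nxt = rng.choice(neighbors(cell))
        if not links[nxt]:          # first visit: carve a passage
            links[cell].add(nxt)
            links[nxt].add(cell)
            unvisited -= 1
        cell = nxt                  # already visited: just keep wandering
    return links
```

The slowness on larger grids comes from that final stretch: the walk blunders around the mostly-finished maze waiting to stumble into the last few unvisited cells.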

Here's the Ruby code from the book:

And here's my implementation in C#:

This produces a random path. Here's a text representation showing the shortest path from the center to the lower-left corner:

And here's a graphical representation showing a heat map of distances from the center:

What's interesting is that if we run this multiple times, we won't see a pattern forming (such as the diagonal pattern formed with the Binary Tree). Here are several runs with a larger grid:

This starts to look pretty interesting. Unfortunately, because of the random nature of the algorithm, it gets much slower the larger the grid, and I don't think there's an easy way to parallelize the process.

As a side note, the other non-biased algorithm presented in the book, Wilson's algorithm, uses a random walk as well, and it suffers from a similar performance issue. The difference is that while Aldous-Broder has slowness at the end (while it tries to hook up the last few cells), Wilson's has slowness at the front (where it tries to hook up the first cell). I haven't done any performance analysis, and I think that's outside the scope of my interest right now.

Accidental Art
Here's a 255 x 255 grid. This took several minutes to produce on my machine. But because it uses random numbers, the time to produce it is non-deterministic.

This is really cool. I think it's time for me to repurpose this and start creating interesting visual patterns. I could probably set up some color transitions in addition to the shade transitions of the heat map. I'll be playing with this quite a bit in the near future.

Wrap Up
I don't know how much further I'll get in the Mazes for Programmers book. I think I may have gotten the information I need regarding the algorithms and how things work. This is a good jumping off point for me to explore on my own.

I'll be playing with the heat map quite a bit. It's easy to remove the maze lines and just have the colors (I did that accidentally last night). And adding some color transitions would be pretty cool.

I also need to go through the F# code that Steve Gilham put together. It looks like I'll need to decompose things a bit to make it work with various display output (it does text right now, but I'd want to add the graphical display) as well as to support the distance calculations that make the paths and heat maps possible. So I've got lots to play with.

I've also got lots of real work to do as well. But it's also good to play and explore from time to time. That's part of the learning process.

Happy Coding!

Saturday, July 1, 2017

More Maze Programming: Heat Map

It's been a while since I've pulled out Mazes for Programmers (it looks like it was March). I picked it up again tonight to do a bit of programming. This time, I added the code to create a heat map of a maze.

You can get the current code here: GitHub - jeremybytes/mazes-for-programmers.

Here's a sample of the output:

This uses the center square as the "start". The colors get darker as the path to each square is further away.

The code in its current state produces 2 outputs. The example above is a .png file that is saved to the file system (and automatically shown by the console application).

The console application also generates a text output:

If you compare this to the graphic output, you can see that these are the same maze. The numbers here are a bit different. Instead of creating a heat map of distances, this shows the shortest path from the start (the center of the grid) to the finish (the lower left corner). By following the path, this shows that 34 steps are required to get from the center to the finish.
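Both outputs come from the same underlying calculation: a breadth-first flood of distances out from the start cell. Here's a rough Python sketch (it assumes the maze is a dictionary mapping each cell to the set of cells it's linked to, which is a simplification of the book's Distances class):

```python
from collections import deque

def distances(links, start):
    """Distance (in steps) from start to every reachable cell."""
    dist = {start: 0}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        for n in links[cell]:
            if n not in dist:
                dist[n] = dist[cell] + 1
                frontier.append(n)
    return dist

def shortest_path(links, start, goal):
    """Walk backward from the goal, always stepping one cell closer to start."""
    dist = distances(links, start)
    path = [goal]
    while path[-1] != start:
        path.append(min(links[path[-1]], key=dist.__getitem__))
    return list(reversed(path))
```

The heat map just maps each cell's distance to a shade; the text output prints the cells along the shortest path instead.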

The code isn't great at this point. The reason is that I'm doing a straight-across port from the Ruby code that is presented in the book. And I'm also not a big fan of the way the objects are put together in the book code. But I'm still working on the underlying concepts of mazes, and then I can work on making the code a bit better.

As stated previously, I'd like to get some F# code in here, and I've gotten some great contributions from the community including this sample from Steve Gilham: Steve's F# Implementation. I haven't had a chance to dig through Steve's sample. From my initial look, I can tell that I need to think about breaking down problems in a different way than I'm used to.

I'm looking forward to diving into that more.

Happy Coding!

Friday, June 30, 2017

Book Review: Working Effectively with Unit Tests

I recently finished reading Working Effectively with Unit Tests by Jay Fields (Amazon link). I'm a bit mixed on whether I would recommend this book. There are some good unit testing tips, but it wasn't especially memorable, and there were a few things that I didn't care for too much.

The short version is that my main recommendation for unit testing techniques will remain The Art of Unit Testing by Roy Osherove (Jeremy's review).

Things I Liked
Keep What Works
There were a few general messages that I really liked. The first is try stuff and keep what works. I always prefer a non-dogmatic approach because not every technique works in every environment. I really appreciate that Fields talks about some of the things that he's done in the past, liked them, but then later stopped using them because they no longer fit his situation.

Descriptive Tests
Another message that I really liked is to keep tests DAMP (Descriptive And Maintainable Procedures) as opposed to DRY (Don't Repeat Yourself). Keeping common code centralized is a good practice for our production code, but testing code is a bit different. Tests are really meant to be independent, isolated, and not interact with each other, whereas our production code needs to be cohesive and collaborative.

This really follows my general advice to make sure that tests are readable. It's good to be a bit more verbose and explicit so that they are very easy to approach when we need to look at the actual test code.

Setup Methods
As far as specific recommendations, there are several that I found useful. First is generally avoiding setup methods. By keeping setup (the "Arrange" step) local to the test, it enhances readability and helps make sure we are only using the things that we need for a particular test.

For example, in a test class-level setup method, we may have multiple objects instantiated with various states to use in different tests. Some of those objects won't actually be used by a given test, so we've got some wasted code, and it also makes it a bit harder to determine exactly what's important for a particular test.
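As a hypothetical illustration of keeping the Arrange step local (these classes are made up for the example, not from the book), each test builds exactly the objects it reads rather than leaning on a shared setup method:

```python
class Item:
    def __init__(self, name, price):
        self.name = name
        self.price = price

class Inventory:
    def __init__(self, items=()):
        self.items = list(items)

    def total_value(self):
        return sum(item.price for item in self.items)

def test_total_value_of_two_items():
    # Arrange: only what this test needs
    inventory = Inventory([Item("widget", 3), Item("gadget", 4)])
    # Act
    value = inventory.total_value()
    # Assert
    assert value == 7

def test_empty_inventory_has_zero_value():
    # Arrange: a different, smaller setup for a different test
    inventory = Inventory()
    # Act / Assert
    assert inventory.total_value() == 0
```

Everything that matters to each test is visible inside that test, which is exactly the readability win Fields is after.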

I've explored something along these lines (Unit Testing: Setup Methods or Not?), although I tended to use factory methods that are explicitly called to get some of the non-critical bits out of the tests themselves. Fields' technique is definitely worth exploring some more.

There are a couple pieces of good advice regarding assertions, including one assertion per test. This is particularly important because most testing frameworks use exceptions to signal failure. So when the first assertion fails, no code after it will run. If we have several assertions, we can't tell whether we have a single failure or multiple failures. If we keep one assertion per test, then we can tell exactly where our problems are. This also goes along with the DAMP approach; we don't need to be afraid of duplicating behavior in unit tests.
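To illustrate with the book's video-store flavor (these little classes are my invention, not Fields' code), splitting the assertions into separate tests means each failure reports independently:

```python
class Rental:
    def __init__(self, charge):
        self.charge = charge

class Customer:
    def __init__(self, rentals):
        self.rentals = rentals

    def total_charge(self):
        return sum(rental.charge for rental in self.rentals)

    def rental_count(self):
        return len(self.rentals)

# One assertion per test: if total_charge breaks, the count test still
# runs and still reports, so we know whether we have one failure or two.

def test_total_charge_sums_both_rentals():
    customer = Customer([Rental(2.0), Rental(3.0)])
    assert customer.total_charge() == 5.0

def test_customer_counts_both_rentals():
    customer = Customer([Rental(2.0), Rental(3.0)])
    assert customer.rental_count() == 2
```

Yes, the Arrange step is duplicated, and that's the DAMP trade-off: a little repetition in exchange for tests that fail independently.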

Another piece of advice is to assert last. The thing that we are actually verifying should be at the very end of the test method. This makes it really easy to find what we're testing. And this also goes along with the "Arrange/Act/Assert" layout which I really like.

Fields also spends time talking about testing for exceptions and some of the weirdness that is caused by using a try/catch block. When using a try/catch block, the assertion is in the middle of code (usually in the catch block), and we also need to have a "fail" in the middle of the code if an exception is not thrown.

To get around this, Fields suggests making a custom assertion method, Assert.Throws(), that can be used to check for exceptions without a try/catch block, and can also be put at the end of the test. That way we can follow the "assert last" advice. This is similar to the "Assert.Throws()" that is provided with NUnit (and other frameworks that provide custom assertions). I wrote a bit about this in Testing for Exceptions with NUnit.
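A hand-rolled version of that custom assertion might look like this in Python (a sketch of the idea; NUnit's Assert.Throws is the real-world equivalent in .NET, and the withdraw function here is hypothetical):

```python
def assert_throws(exc_type, fn, *args, **kwargs):
    """Pass only if calling fn raises exc_type; fail loudly otherwise."""
    try:
        fn(*args, **kwargs)
    except exc_type:
        return  # the expected exception: the assertion passes
    except Exception as actual:
        raise AssertionError(
            f"expected {exc_type.__name__}, got {type(actual).__name__}")
    raise AssertionError(f"expected {exc_type.__name__}, but nothing was raised")

def withdraw(balance, amount):
    """A hypothetical function that should reject overdrafts."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdraw_rejects_overdraft():
    # Arrange
    balance = 10
    # Act + Assert: the last line, with no try/catch cluttering the test
    assert_throws(ValueError, withdraw, balance, 25)
```

The try/catch weirdness lives inside the assertion helper once, instead of being repeated in every exception test.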

Things I Have Mixed Feelings About
Solitary and Sociable
There were a few things that I had mixed feelings about. One of them had to do with Solitary Unit Tests vs. Sociable Unit Tests.

A solitary unit test is a test where only 1 object is "new"ed up. Everything else is some sort of test double, like a stub or mock. A sociable unit test has multiple objects "new"ed up and checks the interaction between them.
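In Python terms, the distinction might look like this (my own toy example continuing the rental theme; Mock stands in for whatever mocking framework you use):

```python
from unittest.mock import Mock

class Rental:
    def __init__(self, charge):
        self.charge = charge

class Customer:
    def __init__(self, rentals):
        self.rentals = rentals

    def total_charge(self):
        return sum(rental.charge for rental in self.rentals)

def test_total_charge_solitary():
    # Solitary: Customer is the only real object; the rentals are doubles.
    rental = Mock()
    rental.charge = 2.0
    assert Customer([rental, rental]).total_charge() == 4.0

def test_total_charge_sociable():
    # Sociable: real Rental objects interacting with the real Customer.
    assert Customer([Rental(2.0), Rental(3.0)]).total_charge() == 5.0
```

Same method under test, but only the second test exercises the real collaboration between the two classes.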

I don't really like the definition of sociable unit test because to me that steps outside of the world of unit testing and starts moving into integration testing. Fields does mention integration testing, but he looks at that as more of an end-to-end type thing. I've generally looked at integration testing as checking that the objects work together at different levels -- from localized to end-to-end.

This isn't really important in the grand scheme of things. I just seem to put all of my "unit tests" into the "solitary unit test" bucket.

Fields does make the recommendation of separating the solitary unit tests and sociable unit tests. The solitary ones are generally very fast and we want to run those very often, and the sociable ones may take a bit longer (for example, if there are I/O operations) and we probably run those less frequently. This is very good advice.

Sample Scenario
Another thing that I was mixed about is the sample scenario. I do like that the same application code was used throughout the entire book. But the scenario was a video rental store. Since the book is from 2014, this example was a bit out of date when it was published. As someone who is well over 40, I have no problems remembering what it was like to rent movies from a physical store. But I'm sure there are lots of developers today who have not had that experience.

Again, not a big deal. The samples could easily be updated to use a video kiosk rather than a store.

Things I Didn't Like
The First Test
While I like that the same scenario is used throughout the book and that there was a focus on continuously improving the tests, I really did not like the first example.

The reason is that Fields states, "Let's get straight to code" and then shows a rather-difficult-to-read example and says "you don't need to understand this." From my perspective, that defeats the purpose of going straight to code.

Another thing that left a bit of a bad taste had to do with test motivators. Fields spends some time telling us that it is important to understand our motivation for writing tests in order to write effective tests that match that motivation. He also spends quite a bit of time listing different motivators. The problem is that these motivators are not used in the rest of the book (unless I missed it). So we're told that it's important to understand the motivation, but we're never shown how that impacts things in a practical way.

Another thing I don't agree with is Fields' opinion on naming tests. He likens test names to comments. Since the test methods are never called directly, the test names are unnecessary and at worst can add confusion. I agree that test names are comments, but I have seen usefulness in that. When a test has a good name, when it fails I can tell what happened simply by looking at the test explorer; I don't need to dig into details to see what went wrong (at least, I don't need to dig into the test details nearly as often).

I would liken test names more to variable names than comments. When we have useful variable names, it enhances the readability of our code. This is why I'll often include an intermediate variable. Then I can include some "comments" about what it is doing by giving it a good name, even if the code itself is not that hard to understand.

The Last Test
While I like most of the techniques presented, I wasn't a big fan of the final test state. His use of Test Data Builders throughout the book was interesting, and they gave me some ideas that I would like to work through. But the conclusion was a bit of a logical extreme:

Here's the test (abridged):

public class CustomerTest {
  public void chargeForTwoRentals() {
    stub(Rental class)
    stub(Rental class)
    ...
  }
}
For someone walking up to this test for the first time, it is a bit confusing. I think it's because the Arrange, Act, and Assert all get mixed together. Fields promotes this as a good choice because it does follow "assert last" (although it basically crams the entire test into the assertion).

To pick this apart a bit, the behavior that we're testing (the "Act") is at the very end: "getTotalCharge()". The "Arrange" is really everything in the middle, which creates an object with test data so that we can call "getTotalCharge()" on a populated object. The "Assert" is the last step, but also the first step (which is weird in my mind).

I think that the test data builders can be very useful, but I would really like to see a standard "Arrange/Act/Assert" layout with some intermediate variables for readability.
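For what it's worth, here's a sketch of how I'd rather see a test data builder used, with a conventional Arrange/Act/Assert layout and an intermediate variable (the builder and classes are my own stand-ins, not the book's code):

```python
class Rental:
    def __init__(self, charge):
        self.charge = charge

class Customer:
    def __init__(self, rentals):
        self.rentals = rentals

    def total_charge(self):
        return sum(rental.charge for rental in self.rentals)

class CustomerBuilder:
    """Hides the non-essential construction details behind a fluent API."""
    def __init__(self):
        self._rentals = []

    def with_rental(self, charge=2.0):
        self._rentals.append(Rental(charge))
        return self  # fluent: calls chain together

    def build(self):
        return Customer(self._rentals)

def test_charge_for_two_rentals():
    # Arrange
    customer = CustomerBuilder().with_rental(2.0).with_rental(3.0).build()
    # Act
    charge = customer.total_charge()
    # Assert: last, and only one
    assert charge == 5.0
```

We keep the builder's convenience, but the three steps stay visually separate, and the intermediate "charge" variable names what we're actually checking.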

Wrap Up
While Working Effectively with Unit Tests by Jay Fields does have some good advice, I think the drawbacks keep me from making it a recommendation. For general testing advice, I'd still go with The Art of Unit Testing by Roy Osherove.

Happy Coding!