
The Robot Uprising – A Tale of Automated Testing

Ah, testing. It's one of those love/hate relationships. I see writing tests like brushing your teeth: it's something you need to do, because if you don't, you'll regret it in the long term when everything goes wrong and it costs you big time to fix. And just like maintaining your dental hygiene, test automation can be a tedious task. Thankfully, there are tools out there to make life a bit easier, and choosing the right one can save you a lot of time, money and effort.

Now, as you, my astute readers, might know, there are many different types of testing. In fact, this website lists over 100 different types of testing! That's a few too many for my liking, but hey, it'll hopefully keep us employed during the robot uprising, given that the robots will need to be tested as well. Speaking of robots, this blog is primarily about the Robot Framework and how it has done wonders for at least one of those 100 types of testing – Acceptance Testing, and really, what is more important than your customer accepting your solution and handing over your hard-earned pay?

Here comes the obligatory "what is the Robot Framework?" section. Well, the Robot Framework is used for acceptance test driven development (ATDD). What that basically means is that you have promised your solution can do X, Y and Z, but you may or may not have developed the functionality yet. So, by using Robot's keyword- and data-driven test automation capabilities, you write test suites and test cases that invoke REST APIs, run CLI commands over SSH, click things on a webpage and basically prove that your solution does what you promised it would do. When you run the entire test suite and it comes out all green, you can comfortably look your customer in the eye and say "show me the money!"

Okay great, now that we know what the Robot Framework is and why it's useful, how do we use it? The best place to start is the quick start article, but we will cover the basic concepts here. First things first, we need to install it. Installing Robot is simple if you have Python and pip installed: simply run pip install robotframework. You will now be able to run robot --help and see all the wonderful different ways you can run your tests (see executing test cases for more information).

I find examples helpful when trying to get concepts across, so for the remainder of this blog, let's pretend our company, Asimov General Appliances, is creating a REST API driven toaster. This toaster is very much like your printer: you load a loaf of bread in at the beginning of the week and use an app on your smartphone to create toast on demand; heck, let's add Siri and scheduling integration as well. Now we as testers have been commissioned to write our automated tests using the Robot Framework to ensure this amazing new toaster does what the marketing and sales departments say it does, and it will be up to the developers to implement it. Voila, you have acceptance test driven development!

The bulk of the action happens within a test case, which can look a little like this:

*** Test Cases ***
Toaster Can Make Toast    # Test case name, representing a use case we are trying to prove works
    [Documentation]    Tests if the toaster can make edible toast, assumes fresh bread is in the spooler
    Feed Bread Into Toaster    # A keyword with no argument that performs some action
    Start Toasting
    Wait For Toast    30    # Another keyword, this time with an argument, saying let's wait 30 seconds
    Stop Toasting
    ${toast}=    Eject Toast    # A keyword that returns a value, which we can assert matches some criteria
    Should Be Toasted And Edible    ${toast}    # A keyword doing an assertion with an argument, using the ${toast} variable we received from the previous keyword

This is an example of a workflow-driven test, the workflow being a particular way your solution might be used that meets some acceptance criteria. You might be thinking this is some sort of witchcraft, to write such plain English and have some toast actually be made. So let me explain what is happening under the hood.

Ultimately, the keywords are where the magic takes place. Keywords come from two places: Libraries, or the keywords we as testers write ourselves, also known as User Keywords. Library keywords are implemented in standard programming languages, typically Python. User Keywords are just higher level keywords that encapsulate one or more other keywords, and are typically defined within the Test Suite (a collection of test cases) or in a Resource file (which is just a collection of keywords). In our example, all of the keywords are User Keywords that in turn use Library keywords to make REST API calls to the toaster to perform the tasks. If any of the keywords fail, the test case fails, and the resulting report and log file can be inspected and understood by both technical and non-technical personnel, allowing bugs to be quickly identified.
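To make that concrete, here is a rough sketch of what a resources/toaster.robot file could look like for our imaginary toaster. The custom_lib.py library and its Send Toaster Command keyword are made up purely for illustration; only the structure matters.

*** Settings ***
Library    ../libraries/custom_lib.py    # Hypothetical Python library exposing a low-level Send Toaster Command keyword

*** Keywords ***
Start Toasting
    # User Keyword wrapping the (made up) Library keyword that calls the toaster's REST API
    Send Toaster Command    start

Eject Toast
    # Captures the Library keyword's return value so the test case can assert on it
    ${toast}=    Send Toaster Command    eject
    [Return]    ${toast}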

The other important aspect of the Robot Framework to grok is the use of Variables. Any test case will be subject to change, such as the time we want to wait for the toast to be completed, or the number of concurrent toasts we want to make. By using variables, we can write a test once and make it flexible enough to cover a range of scenarios. More information on variables can be found here.
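For example, the toasting time from the earlier test case could be pulled out into a variable so different scenarios can simply override it (the names and values here are purely illustrative):

*** Variables ***
${TOAST_TIME}    30    # Seconds to wait for the toast, override per scenario

*** Test Cases ***
Toaster Can Make Toast Within The Default Time
    Feed Bread Into Toaster
    Start Toasting
    Wait For Toast    ${TOAST_TIME}
    Stop Toasting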

Now to sprinkle some words of wisdom I have garnered from using the Robot Framework.
Follow a style guide – Using a style guide gives your tests structure, and you can even enforce a style with a linter such as robotframework-lint. This guide is a good start for forming your own style guide.
Prefix all keywords with the Library or Resource they originate from – This helps your team members know where User Keywords come from (as they could be in the test case file, a library or a resource), making them easier to understand and to troubleshoot if need be (there's a short example of this after the folder layout below).
Be consistent with naming and delimiters – As Robot uses a tabular syntax separated by two or more spaces, or the pipe character, there's nothing worse than seeing both mixed, or two spaces in some places and four or more in others. Stick to one and try to enforce it.
Simple folder structure – There is no single best way to organise your Robot Framework test suite, but I would follow something similar to this:

.
├── libraries
│   └── custom_lib.py
├── resources
│   └── toaster.robot
├── tests
│   └── api
│       ├── 00_authentication.robot
│       ├── 01_temperature_control.robot
│       └── __init__.robot
└── variables
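
To tie the prefixing tip and the folder layout together, a suite under tests/api could import its dependencies and prefix its keywords like this (the Read Temperature keyword is, again, made up for the sake of the example):

*** Settings ***
Library     ../../libraries/custom_lib.py    # Low-level toaster keywords written in Python
Resource    ../../resources/toaster.robot    # User Keywords such as Start Toasting and Eject Toast

*** Test Cases ***
Toaster Reports Its Temperature
    # Prefixing each keyword with its Library or Resource makes its origin obvious at a glance
    toaster.Start Toasting
    ${temp}=    custom_lib.Read Temperature
    Should Be True    ${temp} > 0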

I hope that this short blog on the Robot Framework and ATDD has made getting started with the framework a little less daunting. It really is an easy-to-use and valuable tool. The existing libraries help you automate many of the tasks you would otherwise have to do manually, and if you don't find one that does what you need, it is easy to create your own using Python or Java. The main thing to remember is to stay organised: use a convention and stick to it, as it makes collaboration and troubleshooting a lot easier.
Well that’s all folks, happy testing!

Originally Published on https://netdevservice.atlassian.net on 5/20/17

From Open Source to Product; A Look Inside the Sausage Making Factory

I’ve spent the last few months working closely with the OpenDaylight and OpenStack developer teams here at Brocade and I’ve gained a heightened appreciation for how hard it is to turn a giant pile of source code from an open source project into something that customers can deploy and rely on.

Kevin Woods

Not to criticize open source in any way – it's a great thing. These new open source projects in the networking industry, such as OpenDaylight, OpenNFV and OpenStack, are going to do great things to advance networking technology.

No, it’s just the day to day grind of delivering a real product that challenges our team every day.

On any given day, when we are trying to build the code, we'll get new random errors, and in many cases it's not immediately obvious where the problem is. In another test we'll get unexpected compatibility problems between different controller elements. Again, somebody made a change and you can't trace the problem. On some days, certain features will stop working for no known reason. Because of the above, we need to continuously update and revise test automation and test plans – that is also done daily.

When it comes to debugging a problem, unless you're working with the source code and regularly navigating it to find problems, diagnosis is difficult. Some of the controller internals are extremely complex, for example the MD-SAL. Digging into that to make either enhancements or fixes is not for the faint of heart.

The OpenDaylight controller is actually several projects that must be built separately and then loaded via Karaf.  This can be non-intuitive.

Another area of complexity is managing your own development train. If you're going to have a non-forked controller that stays very close to the open source, you cannot risk being totally dependent upon the project (for the above reasons and others), and so you basically have to manage a parallel development thread. At the same time, you find problems or want to make minor enhancements that you need in service but cannot contribute immediately back to the project (that takes some review and time). So you're left with this problem of branching and re-converging all the time. Balancing your pace with the project's pace is a challenge every day.

Then there’s all the maintenance associated with managing our own development thread, supporting internal teams, maintaining and fixing the documentation etc.   Contributing or committing code back to the project, when needed, is not a slam dunk either.   There is a commit and review process for that.  It takes some time and effort.

I think we’ll find the quality of the new Helium release to be significantly better than Hydrogen.  Lithium will no doubt be an improvement over Helium and so on.   The number of features and capabilities of the controller will also increase rapidly.

But after going through this product development effort over the last few months, I have a real appreciation for the value that a commercial distribution can bring. And that's just for the controller itself – what about support, training and so on? Well, I'll leave those things for another blog.

Originally Published on the Brocade Community on 10/9/2014

Brocade SDN Solutions Help Customers to Move to an Open, Reliable and Scalable Architecture

Brocade today announces the availability of the Brocade SDN Controller based on OpenDaylight's Boron release, which took place last week. The Brocade SDN Controller is built on the open source architecture and is fully tested, documented, and quality assured. We are the first vendor to commercially distribute OpenDaylight.

In the Brocade SDN Controller 4.0 release, our engineers have fine-tuned the upstream OpenDaylight code and added various scripts and tools that make it easier for customers to use the high availability, backup, and restore functionality and keep their environments up and running across geographies. This helps ensure that customer SLAs are not disrupted.

The engineers have also made it possible for customers to export the data file from an older version's database and import it into the latest version while deploying the software. This eases the transition and makes upgrades to the latest versions less error prone, improving operational efficiency and productivity.

One of the greatest testaments to this is Brocade's win at Arizona State University. Arizona State University, ranked one of the "most innovative schools" by U.S. News & World Report, has continued on its groundbreaking path by using software-defined networking tools developed by Brocade. At any given time at the university, there are 250 research projects that undergraduates, graduate students and post-doctoral students are working on. Many of the roughly 90,000 students on campus are also using mobile classroom tools.

Jay Etchings, Director of Research Computing at ASU, said in one of his interviews that they would like to see "at a moment's glance" if there is a problem on the network, in real time. "Brocade was able to give us a package, and that package included some highly dense devices. Brocade was able to meet many, if not almost all, of our security requirements." ASU has deployed Brocade's MLXe Core Routers, SDN Controller, and Brocade Flow Optimizer and achieved significant success. "It's a simple, easy-to-use interface, so that's very nice for us," Etchings said. "It requires less maintenance, because we can give folks access to devices they need and not have to manage their accounts."

Read the Brocade press release for more details of the ASU win.

Brocade Flow Optimizer (BFO) helps improve business agility by streamlining SDN and existing network operations via policy-driven visibility and control of network flows. It provides distributed attack mitigation by programmatically sensing and clipping DDoS flows at router and switch ports. It gains network-wide visibility of Layer 2 through Layer 4 traffic flows via sFlow and OpenFlow data collected from network devices, and delivers real-time control of flows (drop, meter, remark, mirror, normal forward) through OpenFlow rules pushed to the entire network for deterministic, policy-driven forwarding. Customers can automate the policies applied via an embedded UI or through open APIs.

Learn more about Brocade SDN Controller and Brocade Flow Optimizer.
Download free trial bits of Brocade SDN Controller and Brocade Flow Optimizer.

Originally Posted on the Brocade Community on 9/27/2016
