
Dropwizard, CDI and Activiti


As I like Dropwizard and CDI and as I started to study Activiti, I wanted to see how they could work together. The idea is to deploy a test bpmn that outputs a “hello world” and run it with a simple Dropwizard REST call using only pure Activiti Java API.

Ingredients

Dropwizard 0.9.2 (http://www.dropwizard.io/0.9.2/docs/)

CDI (http://weld.cdi-spec.org/)

Activiti 6.0.0. Beta2 (http://activiti.org/download.html)

Postgresql 9.5 (http://www.postgresql.org/)

I’ve already created a GitHub project demonstrating how Dropwizard and CDI work together, and I’ve used it as a base. The sources are available here: https://github.com/mpevec/dwtest-weld.

From the Activiti stack I’ve only taken the Activiti Engine and embedded it in the Dropwizard (Java) app. I’ve used neither Activiti REST (a WAR file that needs to be deployed on a servlet container) nor Spring to configure the Activiti Engine. PostgreSQL serves as the Activiti database.

Creating Activiti DB

After downloading and installing PostgreSQL, I prepared the database for Activiti: I created a role “activiti” and, with it, a database “activiti”.

In the Activiti 6.0.0.Beta2 zip file there is a folder “database” with a subfolder “create”. Inside are three SQL scripts that create all the DB objects Activiti needs. You can simply run them against the newly created database.

Then I’ve created a DB configuration needed by Activiti inside the Dropwizard config.yml:

activiti:
   jdbcUrl: jdbc:postgresql://localhost:5432/activiti
   jdbcUsername: activiti
   jdbcPassword: activiti
   jdbcDriver: org.postgresql.Driver

POM configuration

There are the following dependencies within the POM file (for convenience I’ve left out the dependencies on CDI):

 <dependency>
     <groupId>org.activiti</groupId>
     <artifactId>activiti-engine</artifactId>
     <version>6.0.0.Beta2</version>
 </dependency>
 
 <dependency>
     <groupId>org.codehaus.groovy</groupId>
     <artifactId>groovy-all</artifactId>
     <version>2.4.5</version>
 </dependency>
 
 <dependency>
     <groupId>postgresql</groupId>
     <artifactId>postgresql</artifactId>
     <version>9.1-901-1.jdbc4</version>
 </dependency>

I’ve used Groovy to output “Hello world” inside the BPMN; hence we need its dependency.

Dropwizard

There’s nothing special here. I’ve just registered a REST resource and stored the reference to the Dropwizard configuration:

 @Override
 public void run(DwConfiguration config, Environment environment) throws Exception {
    ConfigurationHolder.set(config);
    environment.jersey().register(DwActivitResource.class);
 }

CDI producer

I’ve used a producer method for creating the Activiti Process Engine:

public class ProjectEngineFactory {
 
    @ApplicationScoped
    @StandaloneBinding
    @Produces
    public ProcessEngine getProcessEngine() {
       ProcessEngine processEngine = ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration()
         .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_FALSE)
         .setJdbcUrl(ConfigurationHolder.get().getActiviti().getJdbcUrl())
         .setJdbcUsername(ConfigurationHolder.get().getActiviti().getJdbcUsername())
         .setJdbcPassword(ConfigurationHolder.get().getActiviti().getJdbcPassword())
         .setJdbcDriver(ConfigurationHolder.get().getActiviti().getJdbcDriver())
         .setAsyncExecutorEnabled(true)
         .setAsyncExecutorActivate(false)
         .buildProcessEngine();
 
       return processEngine;
    }
}

REST resource

I’ve injected the Activiti Process Engine, deployed the test flow, started it and returned its id:

 @Path("activiti")
 public class DwActivitResource {
 
    @Inject
    @StandaloneBinding
    private ProcessEngine processEngine;
 
    @GET
    @Path("hello")
    @Produces(MediaType.TEXT_PLAIN)
    public String justTextMessage() {
 
       RepositoryService repositoryService = processEngine.getRepositoryService();
       RuntimeService runtimeService = processEngine.getRuntimeService();
 
       Deployment deployment = repositoryService
          .createDeployment()
          .addClasspathResource("helloWorldGroovy.bpmn20.xml").deploy();
 
       ProcessInstance processInstance = runtimeService
          .startProcessInstanceByKey("helloWorld");
       return processInstance.getId();
    }
    ...
}

Now when you call http://localhost:9999/v2/activiti/hello you should see the process instance id in the response and “hello world” in the console.

That’s it, pretty simple. The entire code is on GitHub: https://github.com/mpevec/dwtest-activiti.


From Zero to Hero: AngularJS Validation


Today we’re going to talk about front-end validation using AngularJS and Bootstrap, illustrated by four separate examples:

–       usage of AngularJS built-in validation directives;

–       custom validation of a bank account with the AngularJS service;

–       username validation done after the onblur event;

–       group field validation.

Preparing a good stock

As in cooking, we need a good stock that will serve as the foundation for our development.

Bower

What is Bower (https://github.com/bower/bower)? It is a front-end package manager. You can use it to install (download) a specific package into your project, for example:

bower install bootstrap
bower install jquery
bower install angular

All installed libraries are then stored inside the /bower_components folder. Your own project can also be defined as a Bower package by using the init command:

 bower init

As a result, your project dependency list will be created inside the bower.json file, which looks like this:

{
  "dependencies": {
    "angular": "~1.2.3",
    "bootstrap": "~3.0.2"
  }
}

So when you pack up the whole project and push it, for example, to GitHub, you can push it without the dependencies (packages). If anyone wants to use the project (after obtaining it from GitHub), they only need to run the following command:

bower install

This will start downloading all necessary dependencies defined in bower.json.

Another useful feature is registering your (GitHub) project as a Bower package. That means everyone can use the command:

bower install your_project

Your project is then installed automatically with all dependencies defined in bower.json.

Bootstrap

The next ingredient in our stock is Bootstrap. It is a front-end framework for responsive web development. In our case we will need its advanced grid system and CSS styling. Basically, it means that we’ll use a prepared set of appropriately styled HTML elements (http://getbootstrap.com/css/) for the form usage.

We are going to use the so-called Horizontal Form that uses predefined grid classes.

<form class="form-horizontal">
  <div class="form-group | has-error | has-success">
    <label for="inputEmail" class="col-sm-2 control-label">Email: *</label>
    <div class="col-sm-10">
      <input type="email" class="form-control" id="inputEmail">
      <span class="help-block">Validation Error Message</span>
    </div>
  </div>
  <div class="form-group">
    <div class="col-sm-offset-2 col-sm-10">
      <button type="submit" class="btn btn-default">Submit</button>
    </div>
  </div>
</form>

The class form-group acts as a grid row, so there is no need to use a standard Bootstrap .row class.

We can also add the class has-error or has-success, depending on the result of validation.

Inside a row we use two columns for positioning: one with the class col-sm-2 and one with the class col-sm-10. The first column is used for the input label with the class control-label, whereas the second is used for the input itself, which carries the class form-control.

For the position of the submit button an offset is used (class col-sm-offset-2) and classes btn btn-default are used for button styling.

For any kind of messages (for example validation messages) that are bound to the input, a span with the class help-block is used.

Directive controllers

How can directives work together, i.e. talk to each other? The answer is: through a directive controller, a function assigned to the controller property of the directive. We can say that the directive controller represents the directive’s API to other directives. We use the require property of a directive to obtain another directive’s controller, which is then accessible as the last parameter of the directive’s link function:

require: “passwordVerify”

where “passwordVerify” is the name of the directive whose controller we want to access. How to assemble a name for the require property is demonstrated in the book by Brad Green & Shyam Seshadri, AngularJS, page 134, table 6.7.

We can also require several controllers at once using array notation:

require: ["passwordVerify", "ngModel"]

In that case, an array of controllers is passed to the link function as a parameter, and we can access a specific controller in the usual way (ctrl[0], ctrl[1], etc.).

But why do we talk about directive controllers in the context of validation? Because our custom validation directive is going to talk to the ng-model directive and its controller, NgModelController. The reason hides in its method $setValidity(..), which is called to mark a validation error.

Isolated Scope

Because validation in AngularJS is done with directives, we have to understand how scope works in relation to them. Usually you want to access the scope to watch model values, etc. There are three ways of getting a scope in a directive, controlled by the directive’s scope property.

By default you get the existing scope (scope: false) from your directive’s DOM element. You can also get a new scope (scope: true), or an isolated scope (scope: {..}), which is usually used for reusable components and validation.
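The difference between these scope options can be sketched framework-free in plain JavaScript (a simplification: AngularJS child scopes inherit prototypally from their parent):

```javascript
// Parent scope with one property
var parentScope = { message: 'example message' };

// scope: true — a new child scope that prototypally inherits from the parent
var childScope = Object.create(parentScope);

// scope: {} — an isolated scope that sees nothing from the parent by default
var isolatedScope = {};

console.log(childScope.message);    // 'example message' (inherited)
console.log(isolatedScope.message); // undefined (must be passed in explicitly)
```

This is exactly why an isolated scope needs explicit binding strategies: nothing from the parent scope is visible by default.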

In the case of an isolated scope you can pass data to and from the parent scope using different binding strategies. Let’s look at an example.

<body ng-app="exampleModule">
<form class="form-horizontal" ng-controller="exampleController">
    <div class="form-group">
       <label class="col-sm-2 control-label" for="inputExample">Some label: *</label>
       <div class="col-sm-10">
           <input required type="text" name="name" id="inputExample" 
               class="form-control"
               ng-model="message"
               my-message-as-expr-param="getMessage(param1)"
               my-message-as-expr="getMessage()"
               my-message-as-string="{{message}}"
               my-message="message" >
       </div>
   </div>
</form>
</body>

In the section above we see custom directives, all starting with my-message. We are going to show how to pass data to and from the parent scope (the scope of the input element).

First of all we define an Angular controller, its scope variable message, and two scope methods, getMessage() and getMessage(param): the first simply returns the message, the second returns it in upper case when the parameter is set.

var exampleModule = angular.module("exampleModule", []);
exampleModule.controller("exampleController", ['$scope', function ($scope) {
    $scope.message = 'example message';
    
    $scope.getMessage = function() {
        return $scope.message;
    }
    // JavaScript has no overloading: this definition replaces the one above,
    // but it also covers the no-argument case
    $scope.getMessage = function(param) {
        if(param) return $scope.message.toUpperCase();
        return $scope.message;
    }
}]);

Now we have to create a myMessage (my-message) directive:

exampleModule.directive("myMessage", ['$parse', function($parse) {
  return {
    require: "ngModel",
    scope: {
        myMessage: '=myMessage',
        myMessageAsString: '@myMessageAsString',
        myMessageAsExpr: '&myMessageAsExpr',
        myMessageAsExprParam: '&myMessageAsExprParam'
    },
    link: function(scope, element, attrs, ctrl) {
        //example message
        console.log("-- ctrl view value: " + ctrl.$viewValue);
        console.log("-- myMessage: " + scope.myMessage);
        console.log("-- myMessageAsString: " + scope.myMessageAsString);
        console.log("-- myMessageAsExpr: " + scope.myMessageAsExpr());
        console.log("-- myMessageAsExprParam: " + 
                        scope.myMessageAsExprParam({'param1': true}));
        
        //other ways of 'example message':
        console.log("-- successfully evaluated value of the attribute myMessage: " + 
                       $parse("myMessage")(scope));
        console.log("-- string value of the attribute myMessage: " + attrs.myMessage);
    }
  };
}]);

By using require we gain the ngModelController that represents the API of the directive for ng-model used on the input element. This controller is then injected as the last parameter in the link function. We did that only to access the $viewValue property of the controller.

As you can see, we are using an isolated scope with four different binding strategies (actually three, the last one just adds a parameter):

  • myMessage: '=myMessage' (two-way data binding of the isolated scope property myMessage with the value of the attribute my-message, in our case the parent scope property message);
  • myMessageAsString: '@myMessageAsString' (passing the string value of the attribute my-message-as-string to the isolated scope property myMessageAsString);
  • myMessageAsExpr: '&myMessageAsExpr' (passing the function getMessage() to the isolated scope property myMessageAsExpr, so it can be called from the directive);
  • myMessageAsExprParam: '&myMessageAsExprParam' (passing the function getMessage(param1) to the isolated scope property myMessageAsExprParam, which can be called later, in this case with a parameter).

Remark: we could use the shorter notation for the binding strategies, e.g. myMessage: '='. We can do that when the property names are the same on both sides.

We also show two other ways of outputting our message: the injected $parse service can be used to evaluate the isolated scope property myMessage, and we can directly use the attrs parameter of the link function.
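For a plain property name, the behaviour of $parse can be sketched framework-free (the parse helper below is a hypothetical simplification; the real service evaluates full Angular expressions):

```javascript
// Minimal sketch: turn a property name into a getter function over a scope object
function parse(path) {
  return function (scope) {
    return scope[path];
  };
}

var scope = { myMessage: 'example message' };
console.log(parse('myMessage')(scope)); // 'example message'
```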

Organization of the code

Even though this is a small Angular project, we group our code by functional areas, so that we can see which objects are related or depend on each other. In our case, the functional areas are the validation examples.

For each area we also create a manifest file with all its dependencies, which looks as follows:

angular.module("dish4", ['ngRoute']);
angular.module("dish4").config(['$routeProvider', function ($routeProvider) {
    $routeProvider.
        when('/dish4', {
            controller: 'Dish4Controller',
            templateUrl: 'app/dish4/dish4.tpl.html'
        })
}]);
angular.module("dish4").controller("Dish4Controller", ['$scope', Dish4Controller]);
angular.module("dish4").directive("password", PasswordDirective);

This way we can quickly figure out what is going on. Implementations are stored in separate files.

Menu 1: Angular built-in validation directives

First, we are going to show how to use the Angular built-in validation directives ng-required, ng-minlength and ng-pattern. We will only show a few segments of the final code; everything can be downloaded from GitHub (https://github.com/mpevec/blog/tree/master/js):

<form class="form-horizontal" name="f1" ng-submit="submit()">
    <div class="form-group" ng-class="{'has-error': f1.inputTaxNumber.$dirty && 
                                                    f1.inputTaxNumber.$invalid,
                                       'has-success': f1.inputTaxNumber.$dirty && 
                                                      f1.inputTaxNumber.$valid}">

As you can see, every input has properties that help us with validation: $dirty ($pristine), $valid, $invalid. Their meanings are self-explanatory.

<input class="form-control" type="text" id="inputTaxNumber" name="inputTaxNumber"
    ng-required="true"
    ng-minlength="8"
    ng-pattern="/^[0-9]+$/"
    ng-model="dish.taxNumber">
<span class="help-block" 
      ng-show="f1.inputTaxNumber.$dirty && 
               f1.inputTaxNumber.$error.required">Required.</span>
<span class="help-block" 
      ng-show="f1.inputTaxNumber.$dirty && 
               f1.inputTaxNumber.$error.minlength">Minimal length is 8.</span>
<span class="help-block" 
      ng-show="f1.inputTaxNumber.$dirty && 
               f1.inputTaxNumber.$error.pattern">Only numbers allowed.</span>

We can very easily validate the required tax number: it must consist of digits only, with a minimum length of 8. Validation kicks in as soon as the user enters data (checked via the $dirty flag).
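Framework-free, the three checks boil down to something like this (validateTaxNumber is a hypothetical helper, for illustration only):

```javascript
// Mirrors ng-required, ng-minlength="8" and ng-pattern="/^[0-9]+$/";
// like Angular, minlength and pattern pass on an empty value (required covers that case)
function validateTaxNumber(value) {
  return {
    required: !!value,
    minlength: !value || value.length >= 8,
    pattern: !value || /^[0-9]+$/.test(value)
  };
}

console.log(validateTaxNumber('12345678')); // { required: true, minlength: true, pattern: true }
console.log(validateTaxNumber('12ab'));     // { required: true, minlength: false, pattern: false }
```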

Menu 2: Custom validation with Angular service

Now we are going to implement a bank account validation using the Angular service. It will be called from a custom directive created just for this validation case.

Our service is very simple: it has only one method which checks if the bank account starts with “12345”:

this.validate = function(number) {
    if(number.substring(0, 5) == "12345") {
        return true;
    }
    return false;
}

The input field is similar to before; we just add the custom directive bank-account:

<input class="form-control" type="text" id="inputBankAccount" name="inputBankAccount"
    ng-required="true"
    ng-minlength="15"
    ng-maxlength="15"
    ng-pattern="/^[0-9]+$/"
    bank-account
    ng-model="dish.bankAccount">
<span class="help-block"
      ng-show="f1.inputBankAccount.$dirty && 
               f1.inputBankAccount.$error.required">Required.</span>
<span class="help-block" 
      ng-show="f1.inputBankAccount.$dirty && 
               f1.inputBankAccount.$error.minlength">Minimal length is 15.</span>
<span class="help-block" 
      ng-show="f1.inputBankAccount.$dirty && 
               f1.inputBankAccount.$error.maxlength">Maximum length is 15.</span>
<span class="help-block" 
      ng-show="f1.inputBankAccount.$dirty && 
               f1.inputBankAccount.$error.pattern">Only numbers allowed.</span>
<span class="help-block" 
      ng-show="f1.inputBankAccount.$dirty && 
      f1.inputBankAccount.$error.bankAccount">Invalid bank account.</span>

And the directive is implemented like this:

return {
    restrict: 'A',
    require: "ngModel",
    scope: {},
    link: function(scope, element, attrs, ctrl) {
        ctrl.$parsers.push(function(viewValue) {
            if(viewValue) {
                if (BankAccountService.validate(viewValue)) {
                    ctrl.$setValidity('bankAccount', true);
                    return viewValue;
                }
                else {
                    // it is invalid, return undefined (no model update)
                    ctrl.$setValidity('bankAccount', false);
                    return undefined;
                }
            }
            return viewValue;
        });
    }
};

We require the controller of the ng-model directive, which is responsible for validation, and use its API, i.e. the method $setValidity, to set the validation result.

We add our validation function to the end of the $parsers array, which contains all the validation functions. This way we can be sure that our custom (expensive) validation is called only at the end of the validation chain, not before.
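A simplified sketch of how such a parser chain behaves (with the caveat that the real NgModelController keeps running the remaining parsers even after one returns undefined):

```javascript
// Each parser transforms the view value; returning undefined blocks the model update
var parsers = [];

parsers.push(function trim(viewValue) {
  return viewValue && viewValue.trim();
});

parsers.push(function digitsOnly(viewValue) {
  return /^[0-9]+$/.test(viewValue) ? viewValue : undefined;
});

function runParsers(viewValue) {
  return parsers.reduce(function (value, parser) {
    return value === undefined ? undefined : parser(value);
  }, viewValue);
}

console.log(runParsers(' 123 ')); // '123'
console.log(runParsers('12a'));   // undefined, i.e. no model update
```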

Menu 3: Username validation after onblur

Let us imagine the following scenario: we have to validate the username at the back-end. Validation can take some time, during which the user should be blocked. And because the validation is expensive, it should only run after the onblur event.

The solution closely resembles the previous case. There is the input field:

<input class="form-control" type="text" id="inputUsername" name="inputUsername"
    ng-required="true"
    ng-minlength="5"
    ng-model="dish.username"
    username="hasFocus"
    ng-blur="hasFocus=false"
    ng-focus="hasFocus=true">
<span class="help-block" 
      ng-show="f1.inputUsername.$dirty && 
               f1.inputUsername.$error.required">Required.</span>
<span class="help-block" 
      ng-show="f1.inputUsername.$dirty && 
               f1.inputUsername.$error.minlength">Minimal length is 5.</span>
<span class="help-block" 
      ng-show="f1.inputUsername.$dirty && 
               f1.inputUsername.$error.username && 
               !f1.inputUsername.$error.checking">Invalid username.</span>
<span class="help-block" 
      ng-show="f1.inputUsername.$dirty && 
               f1.inputUsername.$error.checking">Checking email...</span>

We use the built-in directives ng-blur and ng-focus, which allow us to set the hasFocus scope variable accordingly. In the username directive we use an isolated scope and bind this variable to it. Then we can watch it inside the directive and trigger the validation:

return {
    restrict: 'A',
    require: "ngModel",
    scope: {
        hasFocus: "=username"
    },
    link: function(scope, element, attrs, ctrl) {

        scope.$watch('hasFocus', function(hasFocus) {
            if(angular.isDefined(hasFocus)) {

                // on blur
                if(!hasFocus) {
                    ctrl.$setValidity('checking', false);
                    UsernameService.validate(ctrl.$viewValue)
                        .then(function(resolvedData) {
                            if(resolvedData) {
                                ctrl.$setValidity('checking', true);
                                ctrl.$setValidity('username', true);
                            }
                            else {
                                ctrl.$setValidity('checking', true);
                                ctrl.$setValidity('username', false);
                            }
                        }
                     );
                }
                
                // on focus
                else {
                    ctrl.$setValidity('username', true);
                    ctrl.$setValidity('checking', true);
                }
            }
        });
    }
}

We also maintain the custom “checking” error key for two reasons:

– we want to display a text message while the user is waiting for validation to finish;

– we want to block the user until validation is complete.

As you can see, we also clear the validation messages when an onfocus event occurs, just to improve the user experience.

In our example we don’t call a real back-end; we simulate it with the help of the $timeout service:

function UsernameService($timeout) {
    this.validate = function(username) {
        return $timeout(function() {
            if(username.substring(0, 5) == "12345") {
                return true;
            }
            return false;
        }, 5000);
    };
}

In this way we can be sure that the validation will last 5 seconds.

Menu 4: Group field validation

In our last example we are going to show a typical group field validation for setting a new password. In this scenario we have two fields that must be validated together: their values must be equal:

<input class="form-control" type="text" id="inputPassword" name="inputPassword"
    ng-required="true"
    ng-minlength="5"
    ng-model="dish.password">
..
<input class="form-control" type="text" id="inputPasswordR" name="inputPasswordR"
    ng-required="true"
    ng-model="dish.passwordr"
    password="dish.password">

We define the directive as follows:

return {
    restrict: 'A',
    require: "ngModel",
    scope: {
        password: '='
    },
    link: function(scope, element, attrs, ctrl) {
   
        scope.$watch(function() {
            var combined;
            if (scope.password || ctrl.$viewValue) {
                combined = scope.password + '_' + ctrl.$viewValue;
            }
            return combined;
        }, 
        function(value) {
            if (value) {
                if (scope.password !== ctrl.$viewValue) {
                    ctrl.$setValidity("passwordVerify", false);
                } 
                else {
                    ctrl.$setValidity("passwordVerify", true);
                }
            }
        });
    }
}

We have two values that must match. The first comes from the input and is accessible via ctrl.$viewValue. The second comes from the isolated scope property password and holds the value of the first input. With the $watch function we run the validation only when at least one of the values has changed.
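The watch logic can be sketched framework-free: the combined string is only a cheap change signal for $watch, while the actual check compares the two values directly (hypothetical helper names):

```javascript
// Mirrors the watch expression from the directive above
function combinedSignal(password, viewValue) {
  if (password || viewValue) {
    return password + '_' + viewValue;
  }
  return undefined; // neither field touched yet: nothing to validate
}

// The validation itself: both values must be equal
function passwordsMatch(password, viewValue) {
  return password === viewValue;
}

console.log(combinedSignal('secret', 'secr'));   // 'secret_secr'
console.log(passwordsMatch('secret', 'secr'));   // false
console.log(passwordsMatch('secret', 'secret')); // true
```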

Once again the entire code can be found on GitHub: https://github.com/mpevec/blog/tree/master/js/validation.

How lean are you?


The purpose of this blog post is to explore lean startup practices within a smaller project called “Povezovalec”. I am really looking forward to presenting all the findings, lessons learned, etc. on this blog. Explaining and refining the accumulated knowledge is the main goal of this project.

Let’s start by explaining the basics. So what is a startup?

According to Steve Blank,[1] it is a search for a scalable, repeatable business model. This is a perfect definition. Why?

Validated learning loop

“Everyone gets hit by ideas when they least expect them… Most people ignore them – entrepreneurs choose to act on them” (Running Lean, Ash Maurya)

At the beginning we have an initial vision of our product that is shared with our colleagues and friends. But this vision must be documented, and as the above saying implies, we must act on it.

The phrase “lean startup” suggests that we start by capturing a business model hypothesis; moreover, Zach Nies, CTO at Rally Software, suggests iterating the following steps:

Hunch => Hypothesis => Build => Measure & Test => Validated Learning

1.) we form a hypothesis, which is used to

2.) build some artifacts (mockups, code, landing page…) only for the purpose of

3.) measuring or testing the hypothesis through customer feedback, for example by doing interviews. Results are then used for

4.) learning, at which stage the hypothesis is either proved or rejected.

This is called a validated learning loop or, in short, an experiment, which is nothing more than Eric Ries’s Build-Measure-Learn cycle[2].

In order to be more successful, some ground rules need to be set (based on [3]).

A hypothesis must be testable (measurable)

The objective conditions under which a hypothesis can be rejected or proved must be defined upfront. For example: a required tax number field on a form will drive away every third customer.

We have to chase an optimal learning loop with an appropriate dose of all three spices: Speed, Focus, Learning.

Learning with focus but without speed will result in running out of resources.

Learning with speed but without focus will result in premature optimization (the root of all evil, as Adam Bien says).

Speed with Focus but without Learning will result in being unproductive.

Make accessible dashboards about learning and business data

Running a startup on faith alone is not enough; it is essential to measure everything and to show the results on a dashboard continuously.

Plan A

So what do we use to document our experiments? Ash Maurya[3] writes about the so-called “Plan A”. At the beginning it contains our initial guesses – hypotheses that have not been tested yet. And as you can guess, Plan A gets improved more and more through iterations of experiments. Don’t forget: if a startup is a search process, then our first idea is probably wrong. It is only part of the journey (that is the first realization of an entrepreneur).

Can Plan A be a traditional business plan? While everybody talks about the importance of a good business plan, we must mention Steve Blank[4] and his statement:

“Business Plan: A document investors make you write that they don’t read”. 

In my opinion, this is the first big shift from the traditional view of a startup, especially in the Slovenian environment, where we are brainwashed daily about the importance of an upfront, static business plan. Sure, a big stack of paper with the big letters “Business Plan” sounds and looks very nice, but it contains too many untested guesses. It is way too rigid and can very quickly become a big block of failure. Since most Plan As are going to fail anyway, we surely need something better and more effective.

So the answer to our question is the lean canvas. The concept was first used by Ash Maurya and is an adaptation of the Business Model Canvas [5]. It looks like this:

[Image: a blank Lean Canvas]

Based on [3], I would like to highlight the most important differences compared to a traditional business plan.

Lean canvas is fast

There is no need to write 60 pages of a business plan. At this stage, all the project essentials can be thrown onto a piece of paper in a single afternoon. And yes, it is ok to leave some segments blank.

Lean canvas is concise

We are forced to choose our words carefully. This helps us anyway, because we must grab the attention of investors in 30 seconds and that of customers in 8 seconds ([3], p. 19).

Lean canvas is dynamic

Due to its conciseness, it is much easier to change a lean canvas based on validated learning during iterations of experiments.

Stages of a startup

After we finish Plan A with its initial guesses, we should test it through a set of well-defined experiments, carried out during the first two (of three) stages of our startup.

Stage1: Problem-solution fit

First, there are two variables that need to be decoupled: the problem and the solution.

The problem interview

In my opinion, testing the problem is extremely under-emphasized. It is necessary to get a positive answer to the following two questions: “Is our problem worth solving?” and “Can our problem be solved?” Therefore, our hypothesis must be about understanding the problem. A formed artifact helps us articulate the problem, whereas learning and measurement are done through interviews concerning the defined problem.

The solution interview

After we have defined the problem, we are in the best position to find a possible solution. We make a demo only to help customers visualize the solution. So our artifact is a demo, and learning and measurement are done through interviews regarding the solution.

Type of validation

Validation of the hypothesis is done qualitatively: at the beginning we don’t have enough data to make statistically significant decisions. Jakob Nielsen and Steve Krug showed that five testers are enough to uncover 85% of usability problems. We can extrapolate from this finding and assume that five customer interviews will be enough to make bold decisions about our hypothesis.
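The 85% figure follows from Nielsen’s formula for the share of usability problems found by n testers, assuming each tester uncovers about L = 31% of the problems (Nielsen’s empirical average):

```javascript
// Share of problems found = 1 - (1 - L)^n
function problemsFound(n, L) {
  return 1 - Math.pow(1 - L, n);
}

console.log(problemsFound(5, 0.31).toFixed(2)); // '0.84', i.e. roughly 85%
```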

Focus

Our focus centers on validated learning and pivots (changes in the direction of a startup). We have to make bold decisions instead of incremental improvements (e.g. rather than changing the style of a button, we change the entire form).

Stage2: Product – Market fit

At this stage we have already validated that we have a problem worth solving, and we have provided a solution in the form of a demo.

The MVP interview

All the information and data gathered up to this point bring us to the next artifact, the Minimum Viable Product (MVP): a product with the minimum set of essential features. This stage is about learning and measurement through interviews about the MVP; we test how well our solution solves the problem.

Type of validation

Any hypothesis after the MVP interview is tested quantitatively, which means we measure the traction we are gaining with customers. If we are doing it right, we are signing up customers and also getting paid. This is why product-market fit is the first significant milestone of a startup.

Focus

We focus on incremental improvements of MVP. At this stage we learn how to create a better product-market fit for a product.

Stage3: Scale

What happens after achieving product-market fit? We reach the final stage, “scale”. Here our goal is simply growth – a goal also aligned with that of the investors; hence this is the ideal time to raise funds.

Focus

Our focus shifts towards growth and optimization. It is all about accelerating the plan that we have formed within the framework of the problem-solution fit.

——————

1 Steve Blank, The Four Steps to the Epiphany: Successful Strategies for Products that Win, 2005

2 Eric Ries, The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses, 2011

3 Ash Maurya, Running Lean: Iterate from Plan A to a Plan That Works (Lean Series), 2012

4 Steve Blank, The Startup Owner’s Manual: The Step-By-Step Guide for Building a Great Company, 2012

5 Alexander Osterwalder, Yves Pigneur, Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers, 2010

Arquillian and Testing REST Services

Posted on

Arquillian is a tool for integration testing of components. In its most basic form it answers the question: how do I test an EJB in its own environment? This article does not cover Arquillian basics, which you can learn at http://jaxenter.com/arquillian-a-component-model-for-integration-testing.1-35304.html; instead, it demonstrates how to test:

a REST service, implemented on top of a stateless EJB, which looks up a specific entity (via JPA) based on an input parameter and returns it.

For testing (as the client) we will use the RESTEasy library (http://www.jboss.org/resteasy/); for converting JSON to Java objects and back we will use the Jackson library (http://jackson.codehaus.org).

Arquillian itself will be configured in GlassFish 3.1 Managed mode; more on that at https://docs.jboss.org/author/display/ARQ/GlassFish+3.1+-+Managed.

Implementing the REST service

To enable REST at all, we must define the so-called Application Path:

@ApplicationPath("/resources")
public class RestApplication extends Application {}

Then we can implement the service itself, using a stateless EJB:

@Stateless
@Path("/alert")
public class TestResource {

    @Inject
    @PuAdminWH
    EntityManager entityManager;

    @GET
    @Path("/{id}")
    @Consumes("text/plain")
    @Produces("application/json")
    public AppJmsAlert getAlertById(@PathParam("id") int id) {

        //This can throw a NoResultException, see the exception mapper below
        AppJmsAlert alert = ...query code...

        return alert;
    }
}

We have implemented a GET call that accepts one text parameter, id, and returns a JSON representation of the JPA entity AppJmsAlert; the call will therefore look like this:

http://localhost:8080/.../resources/alert/1453

If the input parameter yields no result (no JPA entity is found), a NoResultException is raised automatically on the server; in that case we want to return an HTTP response with status BAD_REQUEST to the client. To do that we must implement a so-called Exception Mapper:

@Provider
public class NoResultExceptionMapper implements ExceptionMapper<NoResultException> {

    @Override
    public Response toResponse(NoResultException e) {

        return Response.status(Response.Status.BAD_REQUEST).entity(e.getMessage()).build();
    }
}

Configuring Arquillian

In src/test/resources we create two files:

 arquillian.launch in arquillian.xml

which hold our configuration. In the launch file we put the so-called Container Qualifier, which determines which of the configurations defined in arquillian.xml is active when the tests run. Its content will therefore be:

remote-gf

arquillian.xml has the following content:

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns="http://jboss.org/schema/arquillian"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

    <container qualifier="remote-gf" default="true">
       <configuration>
          <property name="glassFishHome">
               /Applications/NetBeans/glassfish-3.1.2.2
          </property>
       </configuration>
    </container>
</arquillian>

We can see that we need to define the path to the EE container, in this case GlassFish, which Arquillian will start, deploy the test WAR into, and run the tests against.

That is not all; the hardest part follows, namely getting pom.xml right:

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.8.1</version>
    <scope>test</scope>
</dependency>

<dependency>
    <groupId>org.jboss.arquillian.junit</groupId>
    <artifactId>arquillian-junit-container</artifactId>
    <version>1.0.3.Final</version>
    <scope>test</scope>
</dependency>    

<dependency>
    <groupId>org.jboss.resteasy</groupId>
    <artifactId>resteasy-jaxrs</artifactId>
    <version>2.3.4.Final</version>
    <scope>test</scope>
</dependency>

<dependency>
    <groupId>org.jboss.resteasy</groupId>
    <artifactId>resteasy-jackson-provider</artifactId>
    <version>2.3.4.Final</version>
    <scope>test</scope>
</dependency>

<dependency>
    <groupId>org.jboss.shrinkwrap.resolver</groupId>
    <artifactId>shrinkwrap-resolver-api-maven</artifactId>
    <version>2.0.0-alpha-1</version>
    <scope>test</scope>
    <type>jar</type>
</dependency>    

<dependency>
    <groupId>org.jboss.shrinkwrap.resolver</groupId>
    <artifactId>shrinkwrap-resolver-impl-maven</artifactId>
    <version>2.0.0-alpha-1</version>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>com.google.collections</groupId>
            <artifactId>google-collections</artifactId>
        </exclusion>
    </exclusions>
</dependency>

We can see that we put the following on the classpath:

  • JUnit and the Arquillian container;
  • the RESTEasy and Jackson implementations;
  • the ShrinkWrap resolver, which we will look at later.

Additionally, we define a Maven profile containing the following:

<profiles>  
    <profile>
        <id>glassfish-managed-3.1</id>
        <dependencies>
            <dependency>
                <groupId>com.inteligoo</groupId>
                <artifactId>intelicommon</artifactId>
                <version>1.0</version>

                <!--
                  Excluded because it is included below
                -->
                <exclusions>
                    <exclusion>
                        <groupId>javax</groupId>
                        <artifactId>javaee-api</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>org.slf4j</groupId>
                        <artifactId>slf4j-api</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>org.slf4j</groupId>
                        <artifactId>slf4j-log4j12</artifactId>
                    </exclusion>
                </exclusions> 
           </dependency>    
           <dependency>
               <groupId>org.jboss.arquillian.container</groupId>
               <artifactId>arquillian-glassfish-managed-3.1</artifactId>
               <version>1.0.0.CR3</version>
               <scope>test</scope>
               <exclusions>
                   <exclusion>
                       <groupId>org.jboss.arquillian.container</groupId>
                       <artifactId>arquillian-container-spi</artifactId>
                   </exclusion>  
                   <exclusion>
                       <groupId>com.sun.jersey</groupId>
                       <artifactId>jersey-bundle</artifactId>
                   </exclusion>
                   <exclusion>
                       <groupId>com.sun.jersey.contribs</groupId>
                       <artifactId>jersey-multipart</artifactId>
                   </exclusion>
             </exclusions>
         </dependency>

         <!--
           Replaced with newer versions; otherwise I get the error:
org.jboss.arquillian.container.spi.client.protocol.metadata.ProtocolMetaData.getContexts(Ljava/lang/Class;)Ljava/util/Collection;
         -->
         <dependency>
             <groupId>org.jboss.arquillian.container</groupId>
             <artifactId>arquillian-container-spi</artifactId>
             <version>1.0.3.Final</version>
             <scope>test</scope>
         </dependency>

         <!--
           Replaced with newer versions; otherwise I get the error:
             com.sun.jersey.api.client.ClientHandlerException:

A message body writer for Java type, class com.sun.jersey.multipart.FormDataMultiPart, and MIME media type, multipart/form-data, was not found
          -->
         <dependency>
             <groupId>com.sun.jersey</groupId>
             <artifactId>jersey-bundle</artifactId>
             <version>1.12</version>
             <scope>test</scope>
         </dependency>

         <dependency>
             <groupId>com.sun.jersey.contribs</groupId>
             <artifactId>jersey-multipart</artifactId>
             <version>1.12</version>
             <scope>test</scope>
         </dependency>      

         <!--
           Otherwise I get the error:
             Absent Code attribute in method...
         -->
         <dependency>
             <groupId>org.jboss.spec</groupId>
             <artifactId>jboss-javaee-6.0</artifactId>
             <version>1.0.0.Final</version>
             <type>pom</type>
             <scope>provided</scope>
         </dependency>      
     </dependencies>

     <build>
         <testResources>
             <testResource>
                 <directory>src/test/resources</directory>
             </testResource>
             <testResource>
                 <directory>src/test/resources-glassfish-managed</directory>
             </testResource>
         </testResources>
      </build>
   </profile>
</profiles>

IMPORTANT! Be sure to review the code section above in detail; the comments describe all the problems I ran into. Several exclusions are required just to get things running at all.

Preparing the tests

To test our service with RESTEasy, we must implement a so-called stub class, which is just the skeleton of our service:

@Path("/alert")
public interface TestResourcesClientStub {

    @GET
    @Path("/{id}")
    @Consumes("text/plain")
    @Produces("application/json")
    public AppJmsAlert getAlertById(@PathParam("id") int id);  

}

We will use this skeleton in our Arquillian tests:

@RunWith(Arquillian.class)
@RunAsClient
public class TestResourceTest {

    private static final String RESOURCE_PREFIX = RestApplication.class.getAnnotation(ApplicationPath.class).value().substring(1);

    /**
    * Resource that returns e.g. http://localhost:8080/test/ when test.war is deployed.
    * With GlassFish Embeddable this resource does not return the correct value.
    */
    @ArquillianResource
    private URL url;

    @Deployment(testable=false)
    public static WebArchive deploy(){   

        //0.) For adding arbitrary artifacts from the POM as libraries
        final MavenDependencyResolver resolver = DependencyResolvers.use(MavenDependencyResolver.class);

        //1.) Create the archive
        WebArchive war = ShrinkWrap.create(WebArchive.class, "test.war");

        //2.) CONFIGURATION
        war.addClasses(SystemConfiguration.class, PuAdminWH.class, RestApplication.class)

        //ENTITIES
           .addPackage(AppJmsAlert.class.getPackage())   
           .addClasses(KpiReportHelper.class, KpiSettings.class)

        //REST CLASSES
           .addClasses(NoResultExceptionMapper.class, TestResource.class)

        //ALL REQUIRED LIBRARIES: HIBERNATE + JACKSON (object-to-JSON conversion)
           .addAsLibraries(resolver.artifacts("org.hibernate:hibernate-core:4.1.0.Final", "org.hibernate:hibernate-entitymanager:4.1.0.Final", "org.jboss.resteasy:resteasy-jackson-provider:2.3.4.Final").resolveAsFiles())

       //DATABASE
           .addAsResource("test-persistence.xml", "META-INF/persistence.xml")

       //CDI   
           .addAsWebInfResource(EmptyAsset.INSTANCE, ArchivePaths.create("beans.xml")); 

         return war;  
    }

    /**
    * Initialize RESTEasy + Jackson (JSON-to-object conversion)
    */
    @BeforeClass
    public static void initResteasyClient() {

        ResteasyProviderFactory instance = ResteasyProviderFactory.getInstance();
        instance.registerProvider(ResteasyJacksonProvider.class);
        RegisterBuiltin.register(instance);
    }

    /**
    * Test the case where the service returns an alert with the expected ID.
    */
    @Test
    public void testIdExists() {
        int idExists = 490;
        
        //RESTEasy client
        TestResourcesClientStub client = ProxyFactory.create(TestResourcesClientStub.class,  url.toString() + RESOURCE_PREFIX);

        AppJmsAlert alert = client.getAlertById(idExists);
        Assert.assertEquals(idExists, alert.getIdAppJmsAlert().intValue());

    }

    /**
    * Test the service response when an alert with the given id does not exist.
    */
    @Test
    public void testBadResponse() {

        int idDoesntExists = 41290;

        //RESTEasy client
        TestResourcesClientStub client = ProxyFactory.create(TestResourcesClientStub.class,  url.toString() + RESOURCE_PREFIX);

        try {
            AppJmsAlert alert = client.getAlertById(idDoesntExists);
        }
        catch(ClientResponseFailure e) {

            // this is the expected outcome
            if(e.getResponse().getResponseStatus().equals(Response.Status.BAD_REQUEST)){
                return;
            }
        }

        Assert.fail("No bad response for id=" + idDoesntExists);
    }
}

The important points are the following:

  • the @ArquillianResource annotation gives us the part of the URL where our test application is running;
  • RESOURCE_PREFIX gives us the base part of the REST URL, in our case the string “resources”;
  • the test runs as a client (@RunAsClient);
  • the WAR package named “test.war”, containing the REST resource under test, is built with the help of the Maven resolver, which lets us add packages to the WAR straight from a Maven repository;
  • test-persistence.xml is identical to persistence.xml, since the purpose of this test is not to prepare a proper test database; we simply use the existing one. In general this does not hold, and a proper test database should be prepared;
  • all classes used in the test must be added to test.war;
  • the @BeforeClass annotation marks the method in which we define the RESTEasy client, with support for the Jackson library;
  • we implemented two tests: in the first we expect the service to return a specific entity with a specific ID; in the second we expect the service to return BAD_REQUEST, since we deliberately pass a non-existent id as the parameter.

When the test runs, the test.war package is built, GlassFish 3.1 is started, the package is deployed into it, and both tests run as a client. The tests themselves are nothing special; the hardest part is unfortunately configuring all the required libraries (pom.xml), since the default libraries listed on the Arquillian home page unfortunately do not work out of the box.

We can also see that we defined a resource path for our tests (src/test/resources-glassfish-managed), which contains the test-persistence.xml for our test environment.

Pomodoro on the Menu

Posted on

Sooner or later a team loses focus, gets bored and consequently becomes less efficient, despite clearly set goals, individual commitment, daily Scrum meetings and so on. Since we should strive for continuous process improvement, our team tried the so-called “pomodoro” technique.

As the Wiki says (http://en.wikipedia.org/wiki/Pomodoro_Technique), it is a time-management technique that keeps its users optimally focused and fresh; as a result, tasks get finished faster with less fatigue. It works by using a timer to measure 25-minute intervals during which tasks are worked on with full concentration and without interruptions. After the 25 minutes have elapsed, the participants take a short 4-minute break and then repeat the exercise. After every so many completed pomodoro cycles, a longer break is taken.
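Just for fun, the cycle arithmetic above can be sketched in a few lines of Java. The 25-minute work interval and the 4-minute break come from the description; the 20-minute long break after every 4 cycles is an assumed convention, since the post does not fix those numbers:

```java
import java.util.ArrayList;
import java.util.List;

public class PomodoroSchedule {

    // Builds a list of work/break labels for the given number of cycles.
    // A long break replaces the short one after every 'cyclesPerSet' pomodoros.
    static List<String> schedule(int cycles, int workMin, int shortBreakMin,
                                 int longBreakMin, int cyclesPerSet) {
        List<String> plan = new ArrayList<>();
        for (int i = 1; i <= cycles; i++) {
            plan.add("work " + workMin + "min");
            boolean endOfSet = i % cyclesPerSet == 0;
            plan.add(endOfSet ? "long break " + longBreakMin + "min"
                              : "short break " + shortBreakMin + "min");
        }
        return plan;
    }

    public static void main(String[] args) {
        // 4 cycles with the 25-minute intervals and 4-minute breaks from the post
        schedule(4, 25, 4, 20, 4).forEach(System.out::println);
    }
}
```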

We have been using this technique in the team for the last 3 months, and the results are inspiring. We get considerably more work done in less time. I personally use the technique at home as well, for tasks that require a high degree of concentration.

And it is great fun, too.


Figure: 4 of 10 completed pomodoro cycles shown on the whiteboard.

Java EE and Asynchronous Execution

Posted on Updated on

In this post we will look at how to build a “cron job” that optimally distributes a chosen set of strings (our units of work) among a predefined number of parallel, asynchronous executions, i.e. workers.

The worker’s task will simply be to print its unit of work, i.e. the given string. When all workers are busy, the job will wait for the first free one and, when a given worker finishes its execution, automatically start a new one. The work will be dispatched once every 15 seconds.

@Asynchronous

As we know, Java EE 6 does not allow us to use multithreading directly, since threading is managed by the EE container (and its thread pool). However, session beans (EJBs) may implement asynchronous methods. Methods become asynchronous when we mark them with the @Asynchronous annotation.

The calling bean gets control back immediately after the call, before the method even starts executing. Additionally, an asynchronous method may return an implementation of java.util.concurrent.Future<V>, through which the calling bean can check what is happening with the asynchronous execution, e.g. whether it is still active.

As for execution, the EE container will find a new thread in its thread pool and hand the execution over to it. Moreover, in line with EJB transaction propagation, the execution will happen within the same transaction. If we want to create a new transaction, we can do so with the appropriate annotation:

@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)

Implementing the worker

So we need a (stateless) session bean that implements an asynchronous method. The method will return an implementation of java.util.concurrent.Future<String>, so we must also define a view for our session bean, namely a local view. The rule is that if we want the asynchronous method to return anything, we must not use a session bean without a view:

public interface AsyncWorker {
    public Future<String> work(String unit);
}
@Stateless
@Local(AsyncWorker.class)
public class DSLWorker implements AsyncWorker{

    @Asynchronous
    public Future<String> work(String unit) {
        System.out.println("...processing unit: " + unit);

        String status = "END";
        return new AsyncResult<String>(status);
    }
}

Above we can see that at the end of the method we return the status value “END”, which can be used in the calling bean if needed.

Building the cron job

We likewise need a (stateless) session bean which, because it has to run every 15 seconds, uses the @Schedule annotation:

@Stateless
public class CronJobFacade {
     private int numOfExecutions = 3;

     @EJB 
     private AsyncWorker aworker;

     @Schedule(second="*/15", minute="*",hour="*", persistent=false)
     public void dispatch() {
         //LIST OF WORK UNITS:
         List<String> list = new ArrayList<String>();
         list.add("A");
         list.add("B");
         list.add("C");
         list.add("D");

         //asynchronous worker executions
         List<Future<String>> workers = new ArrayList<Future<String>>();

         for (String unitFromList : list) {
             //TOO MANY WORKERS; WAIT FOR AT LEAST ONE TO FINISH
             if(workers.size() > (numOfExecutions-1)) {

                 break_lvl_1:
                 while(true) {
                     for (Future<String> specificWorker : workers) {
                         if(specificWorker.isDone()) {
                             workers.remove(specificWorker);
                             break break_lvl_1;
                         }
                      } 
                 }
             }
             //DISPATCH
             Future<String> worker = aworker.work(unitFromList);
             workers.add(worker);

         }//for
     }
}

Above we can see that we have a list of work that must be processed through asynchronous calls. The predefined number of parallel executions (workers) prevents us from creating as many workers as there are units of work: once we hit the limit, we start waiting in a loop for some execution to finish. We do this by periodically calling the isDone() method on the java.util.concurrent.Future<String> implementation. This part of the logic could be done differently; what is shown is the lean approach.
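Outside a container, the same bounded-dispatch pattern can be sketched with plain java.util.concurrent; this is only an SE analogy of what the container does for us, and the pool size and work units are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BoundedDispatch {

    static final int NUM_OF_EXECUTIONS = 3;

    public static List<String> dispatch(List<String> units) throws Exception {
        // The executor plays the role of the container's thread pool.
        ExecutorService pool = Executors.newFixedThreadPool(NUM_OF_EXECUTIONS);
        List<Future<String>> workers = new ArrayList<>();
        List<String> statuses = new ArrayList<>();

        for (String unit : units) {
            // Too many workers: wait until at least one finishes,
            // mirroring the isDone() polling loop in the bean above.
            while (workers.size() > NUM_OF_EXECUTIONS - 1) {
                for (Future<String> w : new ArrayList<>(workers)) {
                    if (w.isDone()) {
                        statuses.add(w.get());
                        workers.remove(w);
                    }
                }
            }
            // DISPATCH: the task is the SE stand-in for the @Asynchronous worker.
            workers.add(pool.submit(() -> {
                System.out.println("processing unit: " + unit);
                return "END";
            }));
        }
        // Drain the remaining workers.
        for (Future<String> w : workers) statuses.add(w.get());
        pool.shutdown();
        return statuses;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(dispatch(List.of("A", "B", "C", "D")));
    }
}
```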

In real life we have more serious cases than printing strings, e.g. a worker calls a web service (JAX-WS) and sends its result to a JMS queue; in this way we achieve parallel service calls.

The source code for this example is available at:

https://github.com/mpevec/blog/tree/master/ee-cron-async

Programmatic CDI Injection with a Parameter

Posted on Updated on

EJB 3 is not contextual

The dependency injection mechanism (DI in the following) introduced by EJB3 is simple, but it is not contextual. That is why we have the CDI (Contexts and Dependency Injection) specification, which, in addition to so-called typesafe DI, also gives us the concept of a context.

In this post I will not write about CDI basics, but about a more advanced concept that I have used on my projects and about which I believe there is too little information on the web.

Programmatic DI

Say we have the following Java interface:

public interface DealDefinitionChoice {
    public void setDefinitionChoice(Object definitionChoice);
    public Choice getDefinitionChoice();
}

and the following implementation:

public class DealDefinitionChoiceImplementation implements 
DealDefinitionChoice, Serializable {

    private Choice choice;

    @Override
    public void setDefinitionChoice(Object definitionChoice) {
        this.choice = (Choice)definitionChoice;
    }

    @Override
    public Choice getDefinitionChoice() {
        return choice;
    }
}

DI happens after the instance (in which DI is used) has been created and before the method annotated with @PostConstruct is called:

@Named
@SessionScoped
public class DealDefinitionChoiceTest implements Serializable {

    @Inject
    private DealDefinitionChoice choice;

    @PostConstruct
    public void init() {
        …
        Choice definitionChoice = choice.getDefinitionChoice();
    }
}

This means DI happens after the constructor of DealDefinitionChoiceTest is called, and we can use the result of the DI in the init() method.

Note: the above will only work if we have a single implementation of DealDefinitionChoice. With multiple implementations, we would use a CDI qualifier to specify which implementation we want injected.

What if we want to run some logic first and perform DI only afterwards? We can do that with programmatic DI:

@Named
@SessionScoped
public class DealDefinitionChoiceTest implements Serializable {

    @Inject @Any
    private Instance<DealDefinitionChoice> choice;

    @PostConstruct
    public void init() {
       …
       Choice definitionChoice = choice.get().getDefinitionChoice();
    }
}

In the example above, DI happens at the get() call; we first had to set up a placeholder in the form of the choice variable. We also used the @Any annotation, which tells the placeholder that it can accept any implementation of DealDefinitionChoice, while the exact one will be specified at the time of the programmatic DI. In this case we have only one implementation, so there is no problem.

Programmatic DI with a parameter

What if we want to perform DI after running some logic, and use the result of that logic in the DI itself? That means the DI depends on some external parameter, and that parameter is essential for performing it. The easiest way to demonstrate this is with a so-called producer:

public class DealChoiceFactory {

    @Inject
    private DealDefinitionChoice choice;

    @Produces
    @DealDefinitionChoiceFinalQualifier
    public DealDefinitionChoice getChoice(InjectionPoint injectionPoint) {

        String purchaseType = null;

        for (Annotation annotation : injectionPoint.getQualifiers()) {
            if (annotation instanceof ParamDealDefinitionChoiceFinalQualifier) {
                purchaseType = ((ParamDealDefinitionChoiceFinalQualifier) annotation).getPurchaseType();
                break;
            }
        }

        choice.setDefinitionChoice(evaluate(purchaseType));
        return choice;
    }

    private Choice evaluate(String purchaseType) {
       …

       if(purchaseType.equals("…")) return ...

    }
}

Above we have the implementation of the producer method, where through the InjectionPoint instance we get to the String purchaseType parameter, which we then use to determine the implementation. We store the result of the production in the DealDefinitionChoiceImplementation and return it. The InjectionPoint instance thus allows us to programmatically go back to the place where DI happened and retrieve the parameter, which is “stored in the annotation”.

Note also that the InjectionPoint instance can only be used with a so-called pseudo-scope. In other scopes this mechanism cannot work, due to the use of proxies (see http://docs.jboss.org/weld/reference/1.0.0/en-US/html/scopescontexts.html).

Now the question arises of how to get the parameter to the producer, since the producer mechanism does not support that. The trick: we can help ourselves by “storing it in the annotation”.

First we introduce a CDI qualifier that will uniquely mark the producer:

@Qualifier
@Retention(RUNTIME)
@Target({METHOD, FIELD, PARAMETER, TYPE})
public @interface DealDefinitionChoiceFinalQualifier {}

then we introduce a special CDI qualifier that serves only for “storing in the annotation”:

public class ParamDealDefinitionChoiceFinalQualifier extends 
AnnotationLiteral<DealDefinitionChoiceFinalQualifier> implements 
DealDefinitionChoiceFinalQualifier {

    private String purchaseType;

    public String getPurchaseType() {
        return purchaseType;
    }

    public ParamDealDefinitionChoiceFinalQualifier(String purchaseType) {
        this.purchaseType = purchaseType;
    }
}

By saying that the class extends AnnotationLiteral, we have marked it as a CDI qualifier.

Programmatic injection with a parameter now looks like this:

@Named
@SessionScoped
public class DealDefinitionChoiceTest implements Serializable {

    @Inject @Any
    private Instance<DealDefinitionChoice> choice;

    @PostConstruct
    public void init() {
        …
        String purchaseType = ...

        DealDefinitionChoice dealChoice = choice.select(new ParamDealDefinitionChoiceFinalQualifier(purchaseType)).get();

    }
}

In this way we have completely hidden from the programmer the implementation of creating the appropriate choice, which depends on a dynamically supplied parameter, and DI happens exactly when the programmer wants it to.

2012 ?

Posted on

New Year’s resolutions

So, I’m back. One of my New Year’s resolutions is to start writing this blog again, regularly. Today’s post is a short one, fittingly on the very topic of New Year’s resolutions.

Decisions, decisions…

A few days ago I read a good blog (http://www.reallifee.com/) on this topic. To briefly summarize, in the context of New Year’s resolutions we should answer the following questions:

1.) What do I not want to do at all anymore?

2.) On which things do I want to spend less time?

3.) About which things am I creating unnecessary pressure for myself?

4.) With which things do I make others happy while harming myself?

5.) In which things do I no longer wish to improve?

I have already prepared my answers; what about you?

Simple Testing of a JPA 2 Entity with NetBeans 7 and Maven

Posted on

Java EE 6 is a good year old, NetBeans 7 Beta has been out for quite some time (the final release is due in two months), and Maven keeps gaining popularity and is well integrated into today’s IDEs, NetBeans included. So this time our goals are the following:

  • create a new Maven project in NetBeans that tests a newly created entity (JPA 2) via the JUnit framework
  • demonstrate the integration between NetBeans and Maven

We will use the PostgreSQL database; we will not need an application server for this purpose. Everything will run on Windows XP.

One might wonder what the point is of using JUnit on an arbitrary entity, since it only contains setter/getter methods. However, because this kind of testing needs no JTA transactions (so no application server) and because we use the “drop and create” strategy against the database, we can at least test JPQL queries against the entity in question with very little effort.
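As a preview of where this leads, such a test boils down to creating an EntityManagerFactory directly, with no application server involved. This is only a sketch: the persistence-unit name MyAppTestPU, the JPQL query and the bare use of the Auser entity are illustrative assumptions, and running it requires the database and persistence unit set up in the steps below.

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AuserTest {

    @Test
    public void persistAndQuery() {
        // RESOURCE_LOCAL PU: no JTA, no application server needed
        // ("MyAppTestPU" is a hypothetical persistence-unit name)
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("MyAppTestPU");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        Auser user = new Auser();          // the entity created in step 4
        em.persist(user);
        em.getTransaction().commit();

        // "drop and create" guarantees a clean schema, so the count is predictable
        Long count = em.createQuery("SELECT COUNT(u) FROM Auser u", Long.class)
                       .getSingleResult();
        assertEquals(Long.valueOf(1), count);

        em.close();
        emf.close();
    }
}
```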

1. Setting up the Maven environment

We will use Apache Maven 3.0.1. Maven is bundled with NetBeans, but in our case we will use a separate installation. You can download Maven at http://maven.apache.org/download.html and unpack it into any directory, e.g. f:/apache-maven-3.0.1. During installation it is important to add its bin directory to the PATH variable. We can then verify that everything is fine:

By definition, Maven is a solution for building projects (and much more). It works with a POM file, an XML file that describes our project. A project usually needs several so-called artifacts, external modules it depends on. These are the so-called dependencies, which are also defined in the POM file. Since the project itself is also an artifact, we say that Maven is a tool for managing artifacts.

Artifacts initially live in remote repositories and are usually downloaded into the local repository while the project is being built, so Maven also needs an internet connection. The default location of the local repository is /Documents and Settings/User/.m2/repository. At this point it is still empty.

Since we will not be using Maven from the command line, we must set certain parameters in NetBeans. Under Tools => Options => Miscellaneous, Maven tab, we set the path to the installed Maven and the path to the local repository:

2. Preparing the database

We will use PostgreSQL, where we create a new database named MyAppTest. The name contains Test because, while running the application, tables in this database will be dropped and recreated each time.

3. Creating the project

In NetBeans we choose New Project and then Maven => Java Application.

Then we fill in the following parameters:

This creates a new project based on the archetype maven-archetype-quickstart:1.1. A few basic required modules (e.g. the Maven plugin API) are also downloaded into the local repository. We now have the basic project structure:

We can see that the project is split into several parts: one with the source files, one with the tests, one for (Test) Dependencies, and one for the POM file, which at this point contains the following:

<groupId>blog.milanpevec</groupId>
 <artifactId>MyApp</artifactId>
 <version>1.0-SNAPSHOT</version>
 <packaging>jar</packaging>

 <name>MyApp</name>
 <url>http://maven.apache.org</url>

 <properties>
   <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
 </properties>

 <dependencies>
   <dependency>
     <groupId>junit</groupId>
     <artifactId>junit</artifactId>
     <version>4.8.2</version>
     <scope>test</scope>
   </dependency>
 </dependencies>

We can see that the only artifact is junit, the module we will need for testing. We also bump the artifact’s version to 4.8.2, since NetBeans uses version 3 by default.

Because we use JPA 2, we add the following lines to the file:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>2.3.2</version>
      <inherited>true</inherited>
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>
  </plugins>
</build>

This tells Maven to use the Java SE 1.6 compiler plugin in the compile phase of the lifecycle. Otherwise we would later get warnings when working with JPA 2.

4. Creating the JPA 2 entity

We choose File => New => Persistence => Entity Class and fill in the form with the following data:

We have configured the entity named Auser to be created inside the package blog.milanpevec.entities. We left the rest at the defaults. It is important to tick “Create Persistence Unit” at the bottom, so that in the next step we can define the PU at the same time:

For the so-called persistence provider we choose EclipseLink, select a new database connection (or create one if we do not have one yet), and choose the “Drop and Create” strategy, because we want the schema to be dropped and the database objects recreated every time the tests run.

Thanks to the Maven integration in NetBeans, new artifacts that our project depends on are now automatically defined in the POM file:

<dependency>
 <groupId>org.eclipse.persistence</groupId>
 <artifactId>eclipselink</artifactId>
 <version>2.2.0-M4</version>
</dependency>
<dependency>
 <groupId>org.eclipse.persistence</groupId>
 <artifactId>javax.persistence</artifactId>
 <version>2.0.0</version>
</dependency>

These are two artifacts belonging to the same group (org.eclipse.persistence). The eclipselink artifact is needed for the PU, and javax.persistence for the JPA2 entity. Since the PU will use JDBC, we must add an artifact for the driver as well:

<dependency>
 <groupId>postgresql</groupId>
 <artifactId>postgresql</artifactId>
 <version>9.0-801.jdbc4</version>
</dependency>

Because we do not yet have these artifacts in the local repository, NetBeans shows warnings:

This can be resolved in several ways. We could trigger a build, which would download all the required artifacts. Alternatively, we can choose “Show and resolve problems” and then “Download Dependencies” from the project's context menu:

Before doing that, however, we must tell Maven where in the remote repository the eclipselink artifact lives. By default, NetBeans puts the following into the POM file:

<repositories>
 <repository>
   <url>ftp://ftp.ing.umu.se/mirror/eclipse/rt/eclipselink/maven.repo</url>
   <id>eclipselink</id>
   <layout>default</layout>
   <name>Repository for library Library[eclipselink]</name>
 </repository>
</repositories>

However, FTP access can cause problems, so we replace the URL above with the following:

<url>http://www.eclipse.org/downloads/download.php?r=1&amp;nf=1&amp;file=/rt/eclipselink/maven.repo</url>

NetBeans now downloads the missing artifacts into the local repository on its own.

To make it easier to follow what happens during a test run, we add a new property to the PU (inside the persistence.xml file). We want every query against the database to be logged, so we add the property eclipselink.logging.level with the value FINE:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
 <persistence-unit name="blog.milanpevec_MyApp_PU" transaction-type="RESOURCE_LOCAL">
   <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
   <class>blog.milanpevec.entities.Auser</class>
   <properties>
     <property name="javax.persistence.jdbc.url" value="jdbc:postgresql://localhost:5432/MyAppTest"/>
     <property name="javax.persistence.jdbc.password" value="postgres"/>
     <property name="javax.persistence.jdbc.driver" value="org.postgresql.Driver"/>
     <property name="javax.persistence.jdbc.user" value="postgres"/>
     <property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
     <property name="eclipselink.logging.level" value="FINE"/>
   </properties>
 </persistence-unit>
</persistence>

As seen above, we will use RESOURCE_LOCAL transactions, because we will run the tests outside an application server.

Note! EclipseLink needs the names of the entities it manages to be listed explicitly, so we must add the entity name inside the <class> element above. Otherwise we get “JPA exception: Object: … is not a known entity type.”

5. Writing the test

First we add two properties to our entity, firstname and lastname, and specify that the table's primary key is driven by the sequence auser_id_auser_seq. We also define a named query that finds all Auser entities in the database; this is the query we will test later.

@Entity
@NamedQuery(name=Auser.findAll,query="SELECT a FROM Auser a")
public class Auser implements Serializable {

 // Name of the named query defined above (referenced from the test as well)
 public final static String findAll = "Auser.findAll";    
 private static final long serialVersionUID = 1L;

 @Id
 @GeneratedValue(generator = "ContSeq")
 @SequenceGenerator(name = "ContSeq", sequenceName = "auser_id_auser_seq", allocationSize = 1)   
 private Long id;    
 private String firstname;    
 private String lastname;
 
 public Long getId() {
   return id;
 }
 public void setId(Long id) {
   this.id = id;
 }    
 public String getFirstname() {
   return firstname;
 }
 public void setFirstname(String firstname) {
   this.firstname = firstname;
 }
 public String getLastname() {
   return lastname;
 }
 public void setLastname(String lastname) {
   this.lastname = lastname;
 }
}

Now let us look at how to put together a test for our entity. We will write a test method (annotated with @Test) that JUnit runs automatically because of that annotation. Inside it we will test persisting the entity to the database, as well as the single named query we defined. We will use the @Before fixture, which marks a method that runs before each test; in it we obtain the current transaction. In addition, we will use the @BeforeClass and @AfterClass fixtures to create and close our EntityManager:

public class AuserTest  {
 
 private static EntityManagerFactory emf; 
 private static EntityManager em; 
 private static EntityTransaction tx; 
 
 @BeforeClass 
 public static void initEntityManager() throws Exception { 
   emf = Persistence.createEntityManagerFactory("blog.milanpevec_MyApp_PU"); 
   em = emf.createEntityManager(); 
 } 
 
 @AfterClass 
 public static void closeEntityManager() throws SQLException { 
   em.close(); 
   emf.close(); 
 } 
 
 @Before 
 public void initTransaction() { 
   tx = em.getTransaction(); 
 } 
 
 @Test 
 public void createAuser() throws Exception { 
 
   // Creates an instance of auser
   Auser auser = new Auser(); 
   auser.setFirstname("Janez");
   auser.setLastname("Novak"); 
 
   // Persists the auser to the database 
   tx.begin();  
   em.persist(auser); 
   tx.commit(); 
   assertNotNull("ID should not be null", auser.getId()); 
 
   // Retrieves all users from the database 
   List<Auser> ausers = em.createNamedQuery(Auser.findAll).getResultList(); 
   assertEquals(1, ausers.size()); 
 } 
}

A successful test run produces the following output:

We have shown above that an environment for testing entities can be set up very quickly and easily. If anyone would like the full project code, feel free to write to me. Until next time …

The role of management in iterative development

Posted on

We know that each iteration looks like a mini project of its own. From the programmer's, i.e. development, perspective it contains all the RADIT activities (Requirements, Analysis, Design, Implementation, Test). But what about the management perspective? What is management's role in iterative development?

The usual line of thinking goes like this:

  • management merely keeps bureaucracy (i.e. various obstacles) away from the development team, which actually does all the work (on its own);
  • management only takes care of the budget and tracks the work plan;
  • management is not actually needed.

Such thinking usually leads to failure. Management must do more.
We can think on two levels. On the lower level, management must make sure that every iteration has:

  • clear goals (how many new features will be developed, how many fewer bugs will be produced compared to the previous iteration, how many items from the so-called risk list will be resolved, etc.); see also NOTE 1;
  • measurable criteria for assessing success;
  • a team committed to achieving those goals;
  • a defined start and end, and a placement within the development of the whole project;
  • activities planned according to the available resources;
  • an assessment of success at its end.

NOTE 1: What we did not mention above (because it goes without saying) is this: the basic goal of every iteration is a release, a working version of the software. In its basic form this can be just a so-called Proof of Concept, a working prototype, a beta version, and so on.

The above can be illustrated with the cycle: Agree – Execute – Assess.

On the higher level, management must take on the role of leadership. It must:

  • provide long-term focus and direction for the team. This is not about steering within an individual iteration, but about steering with respect to the whole project. Development must not proceed along a random path: we need to know how many resources will still be required until the end, and when those resources will be needed.

This can be illustrated with the cycle: Monitor – Control Direction – Focus. Both of these cycles repeat continuously throughout the entire development.

Because of the importance of management described above, we can add its activity to the iterative activities. We therefore no longer speak of RADIT activities but of MRADIT activities (Management, Requirements, Analysis, Design, Implementation, Test).