Monday, December 5, 2016

Swagger and Flask Tutorial

Introduction

  • API documentation is important.
  • Coding to Interfaces and Separation of Concerns remain fundamental design principles
    • Any project involving a great deal of source code and multiple developers will likely fail without proper attention to them
  • Exposing Functionality is important
    • This conveys and demonstrates progress 
    • In a proper SOA design, functionality is exposed as a series of internal business processes
  • Business Processes are APIs
    • By using Swagger to document the APIs we get an auto-generated user interface that both documents and helps a (somewhat technical) user explore the dialog functionality
    • Any Tester or Business Analyst role should have the skill to navigate a Swagger-generated API, view the documentation, and explore a variety of I/O possibilities



API Documentation in Swagger

You can download the Swagger Editor and run it locally, but I find it just as easy to use the online editor (see References).

Once there, you can create your markup.  There are some useful tutorials in the references section.

Once you've defined your markup, use the Generate Server option, which generates a Python Flask application from the YAML markup.
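For reference, a minimal swagger.yaml along these lines is enough to drive the generator.  The title and path below are illustrative (I've guessed /ups/global from the operationId used later in this post); the operationId and host values match the fixes described in the next section:

swagger: "2.0"
info:
  title: "Dialog API"
  version: "1.0.0"
host: "localhost:5000"
paths:
  /ups/global:
    get:
      operationId: "controllers.occurences_controller.ups_global_get"
      responses:
        "200":
          description: "A successful response"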


Fixing the Generated Code

There were a few manual fixes I had to make each time I regenerated the Flask app.


  1. Each method was generated into a separate Python file (<name>_controller.py), and there was a casing issue I had to correct.

    The swagger.yaml file had this:
    operationId: "controllers.Occurences_controller.ups_global_get"

    and I had to change it to this:
    operationId: "controllers.occurences_controller.ups_global_get"

    in order to correspond to the actual casing of the Python file.
  2. Likewise, the generated code was initially configured to run the Swagger UI on port 80 and the operations on port 8080.

    This caused some trouble, as it wasn't obvious which port to use for which request.

    It became a lot easier when I changed this line in swagger.yaml from
    host: "localhost"
    to
    host: "localhost:5000"
    and altered app.py to run on port 5000 as well (see the app.py sketch after this list):
    app.run(port=5000)
  3. The generated code needed some further changes for my environment.

    For reference, I'm using Python 2.7.12 :: Anaconda 4.1.1 (x86_64) on OS X 10.11.6 (15G1108).
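For context, the entry point of the generated app is a small connexion wrapper.  Mine looked roughly like this after the port change; the exact layout varies by swagger-codegen version, so treat this as a sketch:

#!/usr/bin/env python

import connexion

if __name__ == '__main__':
    # serve both the operations and the Swagger UI from a single, explicit port
    app = connexion.App(__name__, specification_dir='./swagger/')
    app.add_api('swagger.yaml')
    app.run(port=5000)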



Running the App

Browsing to the Swagger UI (served from localhost on port 5000 after the host change above) brought up the generated documentation screen.

I could then expand the operations, find the single operation I had documented, and execute a simple test.
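The same test can also be run outside the UI.  A minimal sketch using the requests library (the /ups/global path is my guess from the operationId above; adjust to whatever path your markup defines):

import requests

# hit the documented operation directly
response = requests.get("http://localhost:5000/ups/global")
print(response.status_code)
print(response.json())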


References

  1. Swagger Tutorial
  2. Online Swagger Editor

Tuesday, October 11, 2016

Python and Mongo: Up and Running with docker-compose

Introduction

This tutorial walks through
  • Usage of docker-compose (v2 syntax) with the data volume persisted on the host (e.g. your laptop)
  • A Python project created using cookiecutter
    • with the Python/Mongo driver specified in the requirements.txt file for controlled and repeatable installation
  • Inserting and querying data to and from Mongo
    • and terminating and re-starting the docker instance to demonstrate persistence beyond the container lifecycle



Environment

At the time of this article, I'm using
  • OS X 10.11.6
  • Docker version 1.12.2-rc1-beta27 (build: 12496) 
  • Python 2.7.12 |Anaconda 4.1.1 (x86_64)| (default, Jul  2 2016, 17:43:17)

Mongo in Docker

Copy this into docker-compose.yml
version: '2'
services:
  mongo:
    image: mongo
    ports: 
      - "27017:27017"
    volumes:
      - /Users/craigtrim/docker/volumes/mongobot:/data/db
Note the use of docker-compose and a standard mongo image; we do not have to create or edit a Dockerfile, and it's best to use trusted images.  Also note the host path in the volumes mapping: point it at a directory on your own machine.
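Once the container is up, a quick sanity check from the host confirms Mongo is reachable (a sketch using pymongo, which we install further below):

from pymongo import MongoClient

# fail fast if the container isn't listening yet
client = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=2000)
print(client.server_info()["version"])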

Copy this into run.sh
#!/usr/bin/env bash

# STEP 1: clean up past docker images
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker rmi $(docker images -q -f "dangling=true")
docker rm $(docker ps -q -f status=exited)

# STEP 2 (optional): deletes all docker images
if [ "$1" == "-c" ]; then
 docker rmi $(docker images -q)
fi

# STEP 3: build and run
docker-compose build
docker-compose up --remove-orphans --force-recreate

We will use this script to gracefully launch docker-compose.
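To use it, make the script executable and run it from the directory containing docker-compose.yml:
chmod +x run.sh
./run.sh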


The Python Project

I prefer to use cookiecutter to create new Python projects.
cookiecutter https://github.com/wdm0006/cookiecutter-pipproject

For developers with a JEE background, this is similar to using the Maven Archetype Generator (shown here for reference):
mvn archetype:generate -DgroupId=$1 -DartifactId=$2 -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

After stepping through the cookiecutter questionnaire, I replace the contents of "requirements.txt" with
pymongo==3.3.0

I use this command to install the python project:
python setup.py install clean --all



The Python Code

The Python code looks like this:

from datetime import datetime

from pymongo import MongoClient

if __name__ == "__main__":

    # connect to the Mongo instance published on the host by docker-compose
    client = MongoClient("mongodb://localhost:27017")

    db = client.primer

    # insert a single document into the "restaurants" collection
    result = db.restaurants.insert_one(
        {
            "address": {
                "street": "2 Avenue",
                "zipcode": "10075",
                "building": "1480",
                "coord": [-73.9557413, 40.7720266]
            },
            "borough": "Manhattan",
            "cuisine": "Italian",
            "grades": [
                {
                    "date": datetime.strptime("2014-10-01", "%Y-%m-%d"),
                    "grade": "A",
                    "score": 11
                },
                {
                    "date": datetime.strptime("2014-01-16", "%Y-%m-%d"),
                    "grade": "B",
                    "score": 17
                }
            ],
            "name": "Vella",
            "restaurant_id": "41704620"
        }
    )

    print(result.inserted_id)

    # read back every document in the collection
    cursor = db.restaurants.find()
    for document in cursor:
        print(document)

The docker instance can now be terminated.  When it is launched again, the data inserted in the first session will still be present, because the data volume lives on the host rather than inside the container.
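A quick way to verify this: stop the container (Ctrl-C, or docker-compose down), launch run.sh again, and re-query the collection.  A sketch (count() is available on collections in pymongo 3.3):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# the document inserted before the restart should still be counted
print(client.primer.restaurants.count())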

Wednesday, February 24, 2016

Neo4j and Python on Ubuntu

Isolation

Isolate your development environment with Vagrant and Ubuntu.

Initialize a vagrant machine by copying this Vagrantfile into a directory:
Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: "echo Initiate Provisioning ..."

  config.vm.box = "ubuntu/trusty64"
  config.vm.network "public_network", bridge: [ 'en0: Wi-Fi (AirPort)', 'en1: Thunderbolt 1', 'en2: Thunderbolt 2', 'bridge0' ]
  
  config.vm.synced_folder("scripts", "/home/vagrant/scripts")

  config.vm.provider "virtualbox" do |v|
    v.gui = false
    v.memory = "4096"
    v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
  end

  config.vm.define "neo" do |neo|
    neo.vm.hostname = "neo"
    neo.vm.network "private_network", ip: "192.168.35.42"
    neo.vm.network "forwarded_port", guest: 7474, host: 7474
  end

end


On the terminal, navigate to the directory with the Vagrantfile and type:
vagrant up && vagrant ssh

You may need to change the adapter names listed in the config.vm.network "public_network" section to reference network adapters that actually exist on your machine.  You may also wish to change the IP address specified in the neo.vm.network "private_network" section.


Prerequisites

Java must be installed first, since Neo4j runs on the JVM.
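A minimal route on Ubuntu 14.04, assuming a Neo4j 2.x release (current at the time of writing), which runs on Java 7:

$ sudo apt-get install -y openjdk-7-jre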


Neo4j Installation

Installation:
$ sudo apt-get update -y

$ sudo su
$ wget -O - http://debian.neo4j.org/neotechnology.gpg.key | apt-key add -
$ echo 'deb http://debian.neo4j.org/repo stable/' > /etc/apt/sources.list.d/neo4j.list
$ exit

$ sudo apt-get update -y
$ sudo apt-get install -y neo4j
The Digital Ocean blog post on installing Neo4j gives detail about each command.  Note that I switched to root (sudo su) for the two middle commands, since they pipe to apt-key and write to /etc/apt/sources.list.d.

Once the commands have been executed, Neo4J should be running.

Verify installation success with the following command:
vagrant@neo:~$ service neo4j status
 * neo4j is running


Also via the web browser at:
http://172.31.99.190:7474/browser/
(presumably the address my bridged adapter picked up; with the forwarded port in the Vagrantfile, http://localhost:7474/browser/ also works from the host)
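With Neo4j running, a short Python smoke test rounds things out.  A minimal sketch assuming py2neo 2.x (pip install py2neo); the node label and property are illustrative, and if authentication is enabled (the default since Neo4j 2.2) you will need to supply credentials first:

from py2neo import Graph, Node, authenticate

# only needed when the server has auth enabled; substitute your own password
authenticate("192.168.35.42:7474", "neo4j", "neo4j")

graph = Graph("http://192.168.35.42:7474/db/data/")

# create a node and read it back with Cypher
graph.create(Node("Person", name="Alice"))
for record in graph.cypher.execute("MATCH (p:Person) RETURN p.name AS name"):
    print(record.name)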