Tuesday, April 12, 2016

#Git starter cheat sheet

Initialize a local repo

Run the following command in the folder where you would like to initialize a git repo.

git init

Get status

It is good practice to check the status frequently during development. The following command shows the changes between the previous commit and the current state of the folder.

git status

Add content

To add an untracked file named 'text.txt' to the staging area, execute the following command.

git add text.txt

Syntax: git add <filename>

Commit changes

To commit changes made to the folder, execute the following command. The message provided with -m is recorded as the commit message for this check-in.

git commit -m "Add text.txt to the code base."

Syntax: git commit -m "<Commit message>"

Add using wild card

To add multiple files using a wild card character, execute the following command.

git add '*.txt'

Syntax: git add '<wildcard_character+string>'

Check history

Review commit history using the following command.

git log
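Put together, the commands above can be exercised in a throwaway repo (the directory, file name, and user identity below are illustrative):

```shell
# Throwaway sandbox: init, stage, commit, then inspect the history
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"   # commit identity (illustrative)
git config user.name "Your Name"
echo "hello" > text.txt
git status --short      # shows text.txt as untracked (??)
git add text.txt
git commit -q -m "Add text.txt to the code base."
git log --oneline       # one-line summary of the commit history
```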

Add Remote repository

To push the local repo to the remote git server, we need to add a remote repository.

git remote add origin http://<remote_host>/<repo_name>/<repo_name>.git

Pushing remotely

The push command tells Git where to put our commits. The name of our remote is origin and the default local branch name is master. The -u option allows git to remember the parameters for subsequent pushes.

git push -u origin master

Pulling remotely

Pull the latest changes from the remote repository to your local repository. All changes made to the remote since your last pull will be brought down to the local repo.

git pull origin master

Identifying differences

The following command shows the differences between our last commit and the current state of the working directory. HEAD refers to our most recent commit.

git diff HEAD

Staged differences

The diff command can also be used to identify changes within the files that have already been staged.

git diff --staged
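A quick sandbox run (file contents are illustrative) shows the difference between the two diffs:

```shell
# Same edit, reported first as unstaged, then as staged
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"   # identity is illustrative
git config user.name "Your Name"
echo "v1" > text.txt
git add text.txt
git commit -q -m "initial"
echo "v2" > text.txt   # modify the tracked file
git diff HEAD          # unstaged edit shows up against HEAD
git add text.txt
git diff --staged      # the same edit, now reported as staged
```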

Resetting the staged additions

Sometimes, we may choose to un-stage changes that have not yet been committed.

git reset HEAD <file_name>


Files can be changed back to the way they were at the last commit. When this command is executed, any uncommitted changes to the file are discarded.

git checkout -- <filename_to_restore>
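A sketch of the un-stage/discard sequence in a scratch repo (names and contents are illustrative):

```shell
# Stage an edit, un-stage it, then discard it entirely
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"   # identity is illustrative
git config user.name "Your Name"
echo "original" > text.txt
git add text.txt
git commit -q -m "initial"
echo "edited" > text.txt
git add text.txt
git reset HEAD text.txt    # un-stage; the edit stays in the working tree
git checkout -- text.txt   # discard the edit; file matches the last commit
cat text.txt               # prints: original
```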

Creating a branch

git branch <branch_name>

Display branches

View list of all branches using the following command.

git branch

Switching branches

To switch to a branch, execute the following.

git checkout <branch_name>


Delete one or more files from the branch or master.

git rm '*.txt'
git rm text.txt

Committing branch changes

git commit -m "<message_text>"

Switching back to master

To merge or copy your changes made in the branch to your master, first switch over to the master branch.

git checkout master

Perform merge

Prepare to merge changes from branch to master.

git merge <branch_name_to_copy_from>
git push
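The whole branch/merge/cleanup cycle can be sketched locally (no remote is involved, so the final push is omitted; all names are illustrative):

```shell
# Branch, commit on the branch, merge back to master, delete the branch
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master   # pin the branch name to master
git config user.email "you@example.com"   # identity is illustrative
git config user.name "Your Name"
echo "base" > text.txt
git add text.txt
git commit -q -m "initial"
git branch feature
git checkout -q feature
echo "feature work" > feature.txt
git add feature.txt
git commit -q -m "Add feature.txt"
git checkout -q master
git merge -q feature       # fast-forwards master to include feature.txt
git branch -d feature      # clean up the merged branch
```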

Clean up branch

git branch -d <branch_name_to_cleanup>

List all remotes

git remote -v
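git remote -v lists each remote twice, once for fetch and once for push. A quick sketch (the URL is a placeholder):

```shell
# Add a remote, then list it
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git remote add origin http://example.com/demo/demo.git   # placeholder URL
git remote -v   # shows origin twice: (fetch) and (push)
```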

Revert local changes

git checkout .
git reset

Remove untracked files

git clean -f

Remove untracked files and directories

git clean -fd
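Before deleting anything, git clean -n does a dry run, which is worth running first; a quick sketch (file name is illustrative):

```shell
# Dry-run first, then actually remove untracked files
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
touch junk.tmp       # untracked file
git clean -n         # dry run: reports what would be removed
git clean -f         # actually remove untracked files
```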

Sunday, April 10, 2016

#Docker cheat sheet

Installing Docker

Installing Docker on Ubuntu

sudo apt-get update
sudo apt-get install -y docker.io
sudo service docker status
docker -v
docker version
sudo service docker start
sudo docker info

Installing Docker on CentOS

yum install -y docker
systemctl status docker.service
systemctl start docker.service

Updating Docker

Add docker repo key to the local apt keychain

wget -qO- https://get.docker.com/gpg | apt-key add -

Add docker repo to apt sources

echo deb http://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install lxc-docker

Basic docker configuration

Viewing Docker socket

ls -l /run

Add user to docker group

sudo gpasswd -a vagrant docker
cat /etc/group

Configure docker daemon on an Ubuntu host to listen on a network port

netstat -tlp
service docker stop
docker -H -d &
netstat -tlp

Connect to a docker Ubuntu host from a CentOS machine

Set env variable
export DOCKER_HOST="tcp://"

Configure docker daemon on an Ubuntu host to listen on a network port and a unix socket

docker -H -H unix:///var/run/docker.sock -d &

Docker images

Pulling a Docker image

docker pull ubuntu
docker pull <image_name>:<tag_name>

Pulling all docker images

docker pull -a ubuntu
Note: Images are stored in /var/lib/docker/<storage driver>


Running a Docker container

Running an interactive Docker container

docker run -it ubuntu /bin/bash

List of running containers

docker ps

List all containers, including stopped ones

docker ps -a

Exit a container without killing it

Ctrl + p followed by Ctrl + q

Docker run with -d (detached)

docker run -d ubuntu /bin/bash -c "ping -c 30"
<-c implies command>

Docker run with -d to restart unless stopped

docker run --restart=unless-stopped -d /bin/bash -c "ping -c 30"

Set restart always on existing docker container

docker update --restart=always <container_id>

Update existing container to restart automatically

docker update --restart=always prime-numbers

Update existing container to not restart automatically

docker update --restart=no prime-numbers

Naming a docker container

docker run -it -v /test-vol --name=voltainer ubuntu:15.04 /bin/bash

Image layers

Display all layers for all images

docker images --tree

Location of images


Copying images to other hosts

docker commit <container_id> new_name
docker save -o $HOME/seacliffsand.tar seacliffsand

Starting up tar images

Peek inside the tar

tar -tf <tarfile>.tar

Import tar image

docker load -i <tarfile>.tar

docker attach

docker attach <container_id>

docker exec

docker exec -it <container_id> /bin/bash

Commands for working with containers

docker run -d

docker run -d --name=<container_name> <image_name>

docker run -d … -c (pass a shell command to the bash shell of the container)
docker run -d ubuntu /bin/bash -c "ping -c 30"
docker top to see the top running processes in a container
docker top <container_id>

cpu shares

docker run --cpu-shares=256

Assign specific amount of memory to a docker container

docker run -m=1g -it <image_name> /bin/bash

docker inspect to get all info about a container or image

docker inspect <container_id or image_id>

docker start stop restart


docker run -it ubuntu:14.04 /bin/bash



ctrl + p + q



docker ps
docker stop <container_id or name>


docker kill -s <posix_sig> <container_id>

all containers run

docker ps -a

last container run

docker ps -l

start a closed container

docker start <container_id>
docker attach <container_id>

restart a running container

docker restart <container_id>

location of containers


deleting containers

docker rm <container_id or name>

delete a running container

docker rm -f <container_id or name>

Getting a shell in a container

nsenter - enter namespace

docker inspect <container_id> | grep Pid
nsenter -m -u -n -p -i -t <Pid> /bin/bash

#to enter a running container
docker-enter <container_id>
docker exec (recommended way of getting a terminal inside container)
docker exec -it <container_id> /bin/bash

Building from a Dockerfile

Comment line – starts with a #

Example: Dockerfile

#Ubuntu based Hello World container
FROM ubuntu:15.04
RUN apt-get update
#Each RUN instruction creates a new layer
#To minimize the number of layers, combine the runs into fewer RUN lines
RUN apt-get install -y nginx
RUN apt-get install -y golang
CMD ["echo", "Hello World"]

docker build command (-t = tag)

docker build -t helloworld:0.1 .
# dot (.) at the end implies build with the Docker file in pwd
docker build -t="<tag>" .

Dockerfile ADD

# The ADD command is executed when the Docker image is being built. It is not executed when the container is created.
# ADD - allows for the source to be a URL

Dockerfile COPY

# The COPY command, like ADD, is executed when the Docker image is being built, not when the container is created. Unlike ADD, COPY does not support URL sources or automatic archive extraction.

Push images to docker hub

docker tag

docker tag <image_id> <username>/<reponame>:1.0
docker push <username>/<reponame>:1.0
#Enter username, password and email id for the docker hub account

docker history
docker history <image_id>
start the container of image built by docker build
docker run helloworld:0.1

Docker private registry

Starting a registry

docker run -d -p 5000:5000 registry
#The DNS name used to resolve our registry becomes a permanent part of the naming context of any repo that we push to our registry.

Accessing the private registry from browser


Using a private registry

Pushing an image to the private registry

docker tag <image_id> <hostname>:5000/<image_name>
docker push <hostname>:5000/<image_name>

Configuring docker config for allowing insecure communication with the private registry

#In /etc/default/docker of the machine hosting the private registry
DOCKER_OPTS="--insecure-registry <hostname>:5000"

#In /usr/lib/systemd/system/docker.service on CentOs client
ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS --insecure-registry <hostname>:5000
#Restart docker on CentOs after making the above change

Docker registry config settings


Running an image hosted in the private registry

docker run -d <hostname>:5000/<image_name>

Diving deeper with Dockerfile

The build cache

docker build -t="build1" .
docker build -t="build2" .
#if the docker daemon finds images that were built with the same instruction as the current one from the Dockerfile, it does not repeat the build step. Instead, it picks up the image from the cache. When operating at scale, this speeds up builds significantly.

Dockerfile and Layers

docker images --tree
docker history <image_layer_id>

Exercise: Building a web server Dockerfile

Dockerfile: Convoluted example

#Simple web server
FROM ubuntu:15.04
RUN apt-get update
RUN apt-get install -y apache2
RUN apt-get install -y apache2-utils
RUN apt-get clean
CMD ["apache2ctl", "-D", "FOREGROUND"]

Exercise: Run the web server container

docker run -d -p 80:80 webserver
docker ps
#connect from the browser: <hostname>:80

Reducing number of layers

Dockerfile: To reduce layers
#Simple web server
FROM ubuntu:15.04
RUN apt-get update && \
apt-get install -y apache2 && \
apt-get install -y apache2-utils && \
apt-get clean
CMD ["apache2ctl", "-D", "FOREGROUND"]

The CMD instructions

#CMD executes only at runtime; it runs a command in the container at launch time. Only one CMD per Dockerfile; if there are more, only the last one takes effect.
#RUN is a build-time instruction; it adds layers to images and is used to install apps.
#CMD shell form - Commands are expressed the same way as shell commands. Arguments passed to CMD this way are automatically prepended with /bin/sh -c.

#CMD exec form - Pass arguments to CMD as json - ["command", "arg1"]. Allows working with containers that do not have a shell. No shell features such as variable expansion or special characters.
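As an illustration, the two forms could look like this in a Dockerfile (the image and the ping command are illustrative, echoing the ping example used earlier):

```dockerfile
FROM ubuntu:15.04
# Shell form: docker wraps this as /bin/sh -c "ping localhost -c 3"
#CMD ping localhost -c 3
# Exec form: runs the binary directly; works even if the image has no shell
CMD ["ping", "localhost", "-c", "3"]
```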

The ENTRYPOINT instructions

#Preferred method of specifying the default app to run in the container. Cannot be overridden at runtime with normal commands.
#Any commands passed to "docker run … <command>" will be used as arguments to ENTRYPOINT
#In Dockerfile
#Execute build and run commands
docker build -t="hw2" .
docker run hw2 Hello World
#the following passes /bin/bash as an argument to the hw2 container's echo ENTRYPOINT.
docker run -it hw2 /bin/bash

#In Dockerfile for apache ENTRYPOINT
#Execute build and run commands (fires up the apache web server on the container)
docker build -t="web2" .
docker run -d -p 80:80 web2 -D FOREGROUND

To override ENTRYPOINT at runtime

--entrypoint on the docker run command line

The ENV instructions

#Creating environment variables to the container
ENV var1=example1 var2=example2
#Using environment variables in the Dockerfile
ENV var1=ping var2=
CMD $var1 $var2


Create a volume on the container

docker run -it -v /test-vol --name=voltainer ubuntu:15.04 /bin/bash

Host mount: Map a local directory to a directory on the container

docker run -p 8080:8080 -v $HOME/jenkins_home:/var/jenkins_home jenkins:1.596.2 &

Volumes in a Dockerfile

#Host mounts are not possible from Dockerfile

FROM ubuntu:15.04
RUN apt-get update && apt-get install -y iputils-ping
VOLUME /data

Deleting a volume

docker rm -v <container>
#If we delete a container without specifying a -v, the container gets deleted, but the volume remains.

Docker Networking

#See what is on the network
ip a
#docker0 is a network bridge or a virtual switch
#bridge-utils is required for viewing what is running on docker0
#apt-get install bridge-utils
#yum install bridge-utils
brctl show docker0


docker run -it --name=net1 net-img
docker run -it --name=net2 net-img
#Each container gets one interface automatically attached to the docker0 bridge

#Inside our container
ip a
#eth0 with an inet address


docker run -it --name=net2 net-img
docker inspect <container_id>
#See NetworkSettings for IPAddress, Gateway, bridge, etc.

ls -l /var/lib/docker/containers/<container_id>
#we are interested in hosts and resolv.conf
cat resolv.conf
#By default resolv.conf is a copy of /etc/resolv.conf on the Docker host
#We can override resolv.conf by passing arguments on the docker run command line

cat hosts
#Recent versions of docker allow changing hosts and resolv.conf on the fly.

Exercise to override resolv.conf:

docker run --dns= --name=dnstest net-img
docker inspect dnstest
#"Dns": [""]

Exposing Ports


#Test for networking module
FROM ubuntu:15.04
RUN apt-get update && apt-get install -y iputils-ping traceroute apache2
ENTRYPOINT ["apache2ctl"]

#build the container
docker build -t="apache-img" .

#run the container - port 5001 on the docker host maps to port 80 on the container - any connection coming into the host on 5001 will be forwarded to port 80 on the container
docker run -d -p 5001:80 --name=web1 apache-img

#see the ports
docker port web1

docker run -d -p 5002:80/udp --name=web2 apache-img

#What IP addresses are available on our docker host
ip -f inet a

#Specifying the host ip for port forwarding
docker run -d -p --name=web3 apache-img

#-P switch - maps all ports marked as exposed in the Dockerfile to high-numbered ports on the docker host
FROM ubuntu:15.04
RUN apt-get update && apt-get install -y iputils-ping traceroute apache2
EXPOSE 80 500 600 700 800 900
ENTRYPOINT ["apache2ctl"]

docker build -t="throw-away" .

#run with -P
docker run -d -P --name=throw throw-away

#display ports
docker port throw

Linking containers

#Link between containers only. Not for communicating to the outside world.
#Define source with an alias
docker run --name=<source_alias> -d <image_name>

docker run --name=src -d seacliffsand

#Define receiver with an alias
docker run --name=<receiver_alias> --link=src:<source_alias> -it ubuntu /bin/bash

docker run --name=rcvr --link=src:seacliffsand-src -it ubuntu /bin/bash

#Verify linkage
docker inspect rcvr

#Attach to the receiver container and review the environment variables
env | grep <source_alias>


#Docker adds the source alias to the /etc/hosts file of the receiver
cat /etc/hosts

cat /etc/hosts | grep seacliffsand-src

#Only the receiver container knows about the source container’s networking config. The receiver can use the environment variables listed above to dynamically and programmatically configure itself.

#We can link multiple recipient containers to a single source container and a single recipient container to multiple sources.


Docker daemon logging

#Start daemon manually from the cli in debug, info, error, fatal mode
docker -d -l debug

#write logs to a file
docker -d -l debug >> logfile.txt

#Add the following line in /etc/default/docker to change the logging level when docker is started as a service

Container logging

Display logs generated by the container’s PID1

docker logs <container_id>

Display the logs as a tail

docker logs -f <container_id>

#Suggested – If application level logs are needed by another system or if logs need to be kept (like most cases), mount a volume so that the application logs will persist outside the container.

Image troubleshooting

Intermediate images

#When there is an error while building from a Dockerfile, an intermediate image with no tag is left behind
#Bring up that image using the image id
docker run -it <image_id> /bin/bash

The docker0 bridge

#Stop the docker service and check the ip address
service docker stop
ip a

#Delete the docker0 link
ip link del docker0

#Edit the docker config file /etc/default/docker - bip refers to 'bridge ip'

#Turn on the docker service again
service docker start

#Run a container again
docker run -it ubuntu:15.04 /bin/bash

#Check ip address again – it will have ip address 150.150.0.X
ip a

Firewall config

IPTables on the docker host

Default value of the following is true
 --icc= inter container communication
 --iptables= decides whether docker will make any modifications to iptables rules

Check iptables

iptables -L -v

Stop docker and edit the docker config file /etc/default/docker - this should stop any inter-container communication


Docker will not be able to make any modifications to iptables rules and as a result icc will also be disabled


Vagrant provisioning

Use the following provisioning lines in a Vagrantfile to create a new jenkins container on the Vagrant box at provision time. Vagrant will automatically ensure that a docker container created using this provisioning method always runs at VM startup.
   config.vm.provision "shell", inline: <<-SHELL
      sudo chmod 777 -R /vagrant/jenkins_home
   SHELL
   config.vm.provision "docker" do |d|
      d.run "jenkins", args: "-p 8080:8080 -p 50000:50000 -v /vagrant/jenkins_home:/var/jenkins_home"
   end

Delete all dangling images

docker rmi $(docker images --quiet --filter "dangling=true")

Saturday, April 2, 2016

Maven Fundamentals

1.0 High Level Overview

·         Open Source product
·         Managed by the Apache Software Foundation
·         Build tool
o   Always produces one artifact or component
o   Helps manage dependencies
·         Project management tool
o   Handles versioning and releases of your code
o   Meta info: Describes what the project is doing or what it produces
o   Easily produce JavaDocs
o   Produce other site information
·         Maven sites are built with Maven. All the Layout is made with Maven’s site generation features.

2.0 Why use Maven

·         Repeatable builds
·         Recreate a build for any environment
·         Transitive dependencies
o   Downloading dependencies also pulls other items needed
·         Contains everything you need to build your code
·         Works with a local repo
o   The local repo caches downloaded dependencies on your machine
o   If local repo already contains the dependency required for a particular project, Maven just references the dependency binary stored in the repo.
o   If local repo does not contain the dependency required for a project, Maven downloads the dependency from public or privately hosted maven repo to the local repo.
·         Works with IDEs and also as a standalone tool
·         Preferred choice with build tools like Jenkins

3.0 Ant Vs Maven

3.0.1 Ant

·         Ant was developed to replace a build tool called make, which was not a cross platform tool. Make was brittle and limited to unix.
·         Ant is built on top of Java and uses XML
·         Ant is very procedural
o   It is hard to inherit anything
o   You have to go out of your way to reuse pieces or achieve any composition inside Ant scripts.
·         Ant isn’t a build tool
·         You have to explicitly do everything in Ant
·         Everything needs to be explicitly defined. Nothing is implicit.
·         No standard out there.
·         Nothing can get carried over from one project to another.
·         Each organization, team and individual can define keywords like clean, init etc. and each one could have a different meaning. Nothing is implicit.
·         No possibility of reuse. You'd have to copy the entire Ant script from one project to another to reuse it. Ant build file: build.xml

3.0.2 Maven

·         Maven is a proper build tool
·         mvn clean – is always maven clean
·         Inheritance in projects is implicitly available
·         Transitive dependencies
·         Consistent across projects
·         Functionality and keywords are strongly defined and standardization is implicit.
·         Built-in versioning features – snapshot vs release
·         Maven follows a convention over configuration model
·         Maven is centered around managing your entire project's lifecycle. Maven build file: pom.xml

4.0 Installation Best Practices

4.0.1 Environment variables

System variables
JAVA_HOME – c:\Program Files\Java\jdk1.8.0_73
MAVEN_HOME – c:\work\tools\build\apache-maven-3.3.3
Add JAVA_HOME and MAVEN_HOME to the Path - %JAVA_HOME%\bin;%MAVEN_HOME%\bin;

5.0 Demo: Hello World

1.       Create a new project named HelloWorld in the IDE of your choice.

2.       Create a file named pom.xml and add the following xml.
       <project>
              <!-- Version of the XML schema structure used in the project. -->
              <modelVersion>4.0.0</modelVersion>
              <!-- Associate group Id with your site or organization name. -->
              <groupId>com.iaditya</groupId>
              <!-- name of our application -->
              <artifactId>HelloWorld</artifactId>
              <version>1.0-SNAPSHOT</version>
       </project>
3.       Create source folder structure in the HelloWorld project - src/main/java
4.       Create a HelloWorld.java file in src/main/java and add the following code.
public class HelloWorld {
              public static void main (String args[]) {
                     System.out.println("Hello World");
              }
}

5.       In command prompt or shell, change directory to the project folder and execute the following commands.

mvn clean
mvn compile

This will generate a target directory containing a classes directory where HelloWorld.class is stored.

mvn package

This will generate a HelloWorld-1.0-SNAPSHOT.jar file in the target directory.

6.0 Folder structure

1.       Maven looks for src/main/java by default.
a.       All Java code of our package is stored here.
b.      This is the beginning of our package declaration. Example: com.iaditya.helloworld package structure exists in src/main/java/com/iaditya/helloworld.
c.       Other languages: src/main/groovy

2.       Unit test cases are stored in src/test/java.

3.       Maven compiles all code to a target directory by referencing defaults and anything we have overridden in the pom.xml file.
a.       All unit tests get run from the target folder.
b.      Contents from this directory get packaged into jar, war or ear file.

7.0 POM file basics

7.0.1 Parts of a POM file

POM files can be classified into 4 basic parts.
1.       Project Information
a.       groupId – Often it is the package in our code.
b.      artifactId – Name of our application.
c.       version – Current version. Snapshot vs Release.
d.      packaging – How do we want to package our application? ear, war, jar or other package types.
2.       Dependencies –
a.       Actual artifacts we want to use in our application.
3.       Build
a.       Plugins – The plugins we want to use to build our code.
b.      Directory structure
                                                               i.      To override the src/main/java directory
                                                             ii.      Target name
                                                            iii.      Location of target
                                                           iv.      Location of specific resources, generated sources and xml etc.
4.       Repositories
a.       Download artifacts from repositories
b.      Optionally download from maven central
c.       Download from private maven repo hosted in your organization

7.0.2 Dependencies Intro

Often considered the most confusing part of Maven.
1.       Dependencies are imported by their naming convention.
2.       We have to know the following three for any dependency we want to use in our code.
a.       groupId
b.      artifactId
c.       version
3.       Added to the dependencies section in the pom.xml.
a.       List the dependency that we want to use
b.      Transitive dependencies will be pulled by Maven
       <project>
              <!-- Version of the XML schema structure used in the project. -->
              <modelVersion>4.0.0</modelVersion>
              <!-- Associate group Id with your site or organization name. -->
              <groupId>com.iaditya</groupId>
              <!-- name of our application -->
              <artifactId>HelloWorld</artifactId>
              <version>1.0-SNAPSHOT</version>

              <dependencies>
                     <dependency>
                            <!-- Package -->
                            <groupId>...</groupId>
                            <!-- Name of the dependency artifact -->
                            <artifactId>...</artifactId>
                            <version>...</version>
                     </dependency>
              </dependencies>
       </project>

7.0.3 Goals

1.       clean – deletes the target directory and any generated sources
2.       compile 
a.       Compiles the source code
b.      Generates any files
c.       Copies resources to our classes directory
3.       package
a.       Runs compile first
b.      Runs unit tests
c.       Packages the app based on the packaging type defined in pom.xml
4.       install
a.       Runs the package command and installs it in your local repo
5.       deploy
a.       Runs the install command and deploys it to a private repository.
b.      Does not deploy to an app server.

7.0.4 Running maven goals

Maven goals can be run individually.
mvn clean
mvn compile
mvn package
mvn install
mvn deploy

And they can be daisy-chained.
mvn clean package

7.0.5 Local repo

1.       Maven stores everything it downloads in a local repository folder on the hard drive. Usually, it is located in the .m2 folder in your home directory.

2.       Avoids duplication. Otherwise, we would end up copying the dependency into every project and storing it in the SCM.

3.       Stores artifacts using the information provided in the pom.xml for the dependencies. Folders named artifactId, groupId and version are created.

7.0.6 Overriding defaults in the build section

Defaults that are implicitly followed by Maven can be overridden in the build section of the pom.xml. In the following example, we are overriding the target package name. By default, the target package name in our HelloWorld example will be HelloWorld-1.0-SNAPSHOT.jar.
              <project>
                     <!-- Version of the XML schema structure used in the project. -->
                     <modelVersion>4.0.0</modelVersion>
                     <!-- Associate group Id with your site or organization name. -->
                     <groupId>com.iaditya</groupId>
                     <!-- name of our application -->
                     <artifactId>HelloWorld</artifactId>
                     <version>1.0-SNAPSHOT</version>
                     <build>
                            <!-- Override the target package name. -->
                            <finalName>HelloWorld</finalName>
                     </build>
              </project>

8.0 Maven Dependencies

1.       Dependencies are other resources that we want to use inside of our application.

2.       Maven will pull transitive dependencies based on the dependencies we list.

3.       Minimum required info to pull dependencies:
a.       groupId
b.      artifactId
c.       version
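For example, declaring JUnit (a widely used artifact with well-known coordinates) in the dependencies section of pom.xml:

```xml
<dependencies>
       <dependency>
              <groupId>junit</groupId>
              <artifactId>junit</artifactId>
              <version>4.12</version>
       </dependency>
</dependencies>
```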

8.0.1 Versions

1.       Development starts off as a SNAPSHOT

2.       SNAPSHOT allows us to push new code to a repository and have our IDE or command line automatically check for changes every time.

3.       If we specify a SNAPSHOT version in our pom.xml, Maven will pull down new code every time it runs and use it.

4.       SNAPSHOT keyword has to be all capital. It does not work as a SNAPSHOT otherwise.

5.       This SNAPSHOT functionality saves you from re-releasing versions for development.

6.       Never deploy to production with a SNAPSHOT because we cannot reproduce or re-create that code. The next time we compile the code, the functionality could be different in the code.

7.       A release does not have to have a specific naming convention. Example: HelloWorld-1.0.jar, HelloWorld-1.0.1.jar

8.       Milestone releases like HelloWorld-1.0-M1.jar or HelloWorld-1.0-RC1.jar do not affect Maven. Such milestone or release candidate versions are published for evaluation purposes and should not be considered release versions.

9.       RELEASE versions are sometimes named with a keyword RELEASE, although it is not necessary.

8.0.2 Types

1.       Types refer to the type of resource that we want to include inside of our application.

2.       The default and the most common type is a jar.

3.       The current core packaging types are pom, jar, maven-plugin, ejb, war, ear, rar and par.

4.       A dependency of type pom is commonly referred to as a dependency pom.
a.       All dependencies inside that pom are downloaded into our application.
b.      Example: If your organization has web services and you want to group all the dependencies used whenever you create a web service (Jersey libraries and various XML dependencies), you can put them all into one pom; reference that pom in your project and Maven will download them into your application.
5.       Types refer to packaging inside our application. If we are building an artifact for other people to consume, we need to specify our packaging in the pom.

8.0.3 Transitive dependencies

1.       The main reason people begin using maven.

2.       If we add a dependency like hibernate, it will go ahead and pull down any and all transitive dependencies that hibernate needs.

3.       If there is a conflict, Maven resolves it by choosing the version nearest to your project in the dependency tree (not necessarily the newest).

8.0.4 Scopes

Six scopes available for dependencies:
1.       compile
a.       Default scope
b.      Resources or artifacts are available everywhere inside your application.
2.       provided
a.       Like compile
b.      Means the artifact is going to be available throughout your entire build cycle, but it’s not going to be added to the final artifact.
c.       Example: servlet-api, xml-apis
3.       runtime
a.       Not needed for compilation
b.      Needed for execution
c.       Included in the runtime and test classpaths, but not the compile classpaths.
d.      Example: Dynamically loaded libraries like jdbc jars
e.      Not bundled with our final artifact
4.       Test
a.       Available for the test compilation and execution phase only.
b.      Not included in the final artifact
5.       system
a.       It is recommended NOT to use system as it is very brittle and breaks the reason for wanting to use maven.
b.      It is used for hard coding a path to a jar on your file system.
6.       import
a.       Deals with dependency management
b.      Advanced topic.
c.       Review at
                                                               i.      http://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html
                                                             ii.      https://developer.jboss.org/wiki/MavenImportScope?_sscc=t
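A scope is declared per dependency. For example, the servlet API is typically marked provided, since the app server supplies it at runtime (the coordinates shown are the standard javax.servlet ones):

```xml
<dependency>
       <groupId>javax.servlet</groupId>
       <artifactId>javax.servlet-api</artifactId>
       <version>3.1.0</version>
       <scope>provided</scope>
</dependency>
```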

9.0 Repositories

1.       Repositories are http-accessible locations that maven looks at to download code and other artifacts used in our application.

2.       For an internal repo, it is advisable to secure it as an https-accessible location.

3.       The default location of the repo is stored in the super pom.xml, which is located inside the maven installation.

4.       This super pom.xml can be overridden using settings.xml or our project’s pom.xml.

5.       Default location is http://repo.maven.apache.org/maven2.

6.       Multiple repositories are allowed.

7.       Corporate repository options:
a.       Nexus (recommended)
b.      Artifactory

9.0.1 Dependency Repo

1.       It is where we download all our dependencies from.

2.       It can contain releases or snapshots or both.

3.       It is common to have releases and snapshots in separate repositories.

4.       Sample repositories section in pom.xml.
       <repositories>
              <repository>
                     <!-- User defined id -->
                     <id>spring-snapshot</id>
                     <!-- User defined description entered in the name tag -->
                     <name>Spring Maven SNAPSHOT Repository</name>
                     <!-- Repo url of the dependency artifact needed in your code -->
                     <url>http://repo.spring.io/snapshot</url>
              </repository>
       </repositories>

9.0.2 Plugin Repo

1.       Identical to dependency repositories, just deals with Plugins.

2.       Maven looks only in plugin repositories when resolving plugins; by design, the plugin repo is usually a separate repository.

3.       Defined very similar to dependency repository.

4.       Snapshots and releases just like the dependency repository.

5.       Optionally, custom plugins can be created and stored in an internal or corporate repository.
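
A minimal pluginRepositories sketch, mirroring the dependency repositories section (the id, name, and url are placeholders):

```xml
<pluginRepositories>
    <pluginRepository>
        <id>corporate-plugins</id>
        <name>Corporate Plugin Repository</name>
        <url>https://nexus.example.com/repository/maven-plugins/</url>
    </pluginRepository>
</pluginRepositories>
```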

9.0.3 Releases/Snapshots

1.       Snapshots and releases can come from the same repo, but usually organizations prefer to keep them separate.

2.       Most projects accumulate many snapshots, release candidates, and milestone builds for each release. Maintaining a separate releases repo makes it easier to push truly releasable code to an app store or a customer access site.
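
A sketch of keeping releases and snapshots separate by toggling the releases and snapshots flags per repository (the ids and URLs are placeholders):

```xml
<repositories>
    <!-- releases only -->
    <repository>
        <id>releases</id>
        <url>https://nexus.example.com/repository/releases/</url>
        <releases><enabled>true</enabled></releases>
        <snapshots><enabled>false</enabled></snapshots>
    </repository>
    <!-- snapshots only -->
    <repository>
        <id>snapshots</id>
        <url>https://nexus.example.com/repository/snapshots/</url>
        <releases><enabled>false</enabled></releases>
        <snapshots><enabled>true</enabled></snapshots>
    </repository>
</repositories>
```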

10.0 Maven Plugins

Maven uses plugins to build and package our application beyond just downloading and storing artifacts locally.

10.0.1 Goals

1.       Goals such as clean and compile are provided by plugins configured in the Maven installation.
2.       These goals are defined in the super pom, which is merged into your project’s effective pom.
3.       Goals are always tied to a phase.
4.       The default goal’s phases can be overridden in your project’s pom if required.

10.0.2 Phases

1.       validate – this validates that
a.       The project is correct
b.      It has all the plugins needed
c.       It has all the artifacts downloaded
d.      All structures are in place
e.      Has permissions to create directories.
2.       compile
a.       Compiles our application source code.
b.      Test code does not get compiled in this phase.
3.       test
a.       Compiles the test code.
b.      Runs the tests against the compiled source code.
4.       package
a.       Packages all of our code in its defined packaging such as jar.
b.      It does nothing further with the package, but lets us verify that everything is assembled in the proper order.
c.       Usually, developers tie generating sources or JavaDoc to this phase.
5.       integration-test
a.       Allows us to deploy and run integration tests
b.      New to maven 3. Not used by most yet.
6.       verify
a.       Runs checks against package to verify integrity before installing to our local repo or our private repo.
7.       install
a.       Installs the package into our local repo.
8.       deploy
a.       Copies the final package to a remote or private repo.

10.0.3 Compiler plugin

1.       The plugin used to compile the source code in our application.
2.       Used to compile both source code and test code using different phases.
3.       Invokes the Java compiler with the classpath set from our application’s dependencies.
4.       Defaults to java 1.5 regardless of which JDK is installed.
5.       The configuration section allows for customization by overriding the JDK version, memory settings etc.

              <plugin>
                  <groupId>org.apache.maven.plugins</groupId>
                  <artifactId>maven-compiler-plugin</artifactId>
                  <configuration>
                      <!-- Override the source/target JDK version. -->
                      <source>1.8</source>
                      <target>1.8</target>
                      <!-- Different for every plugin as the fields that they support can be different. -->
                      <!-- Sometimes context sensitive help in IDE for this section is not available because it can be a CDATA field in XML. -->
                  </configuration>
              </plugin>

10.0.4 Jar plugin

1.       Used to package our compiled code into a jar file.

2.       It is tied to the package phase of our build lifecycle.

3.       Configuration section allows for customization:
a.       Includes or Excludes – to only package certain things in your jar.
b.      Manifest – builds a manifest for our project. (useDefaultManifestFile)
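
A sketch of a jar plugin configuration using the customization options above (the exclude pattern is illustrative):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <configuration>
        <!-- Includes/Excludes: only package certain things in the jar -->
        <excludes>
            <exclude>**/*.properties</exclude>
        </excludes>
        <!-- Manifest: use the default manifest file built for our project -->
        <useDefaultManifestFile>true</useDefaultManifestFile>
    </configuration>
</plugin>
```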


10.0.5 Sources plugin

1.       Used to attach our source code to a jar.

2.       By default it is tied to the package phase.

3.       It is often overridden to a later phase like install or deploy so that the build is faster when we are developing.

                                  <plugin>
                                      <groupId>org.apache.maven.plugins</groupId>
                                      <artifactId>maven-source-plugin</artifactId>
                                      <executions>
                                          <execution>
                                              <!-- This is about WHEN the plugin is going to run -->
                                              <phase>install</phase>
                                              <goals>
                                                  <goal>jar</goal>
                                              </goals>
                                          </execution>
                                      </executions>
                                  </plugin>

10.0.6 Javadoc plugin

1.       Used to attach Javadocs to a jar file when we upload them to our repository.

2.       It is tied to the package phase.

3.       It is often overridden to a later phase to speed up the build time while still developing the code.

4.       The defaults usually work out fine. Optionally, there are customization options for the Javadoc format, like adding a company logo or changing colors.
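
A sketch of moving the Javadoc jar goal to a later phase, as described above (install is shown here; the phase you pick depends on your workflow):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-javadoc-plugin</artifactId>
    <executions>
        <execution>
            <!-- run at install rather than package to keep dev builds fast -->
            <phase>install</phase>
            <goals>
                <goal>jar</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```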