Decomposing a large monolithic enterprise application and migrating it to a multi-container architecture without a redesign is not a trivial task. In this presentation, we dive deep into the decomposition process and show how to make it smooth with the help of auto-scaling and load balancing. Learn about the main roadblocks, and possible solutions, encountered while migrating GlassFish-based applications from VMs to containers, based on real-world experience.
6. Scalability
Unlike with VMs, resizing resource limits in containers:
● can be performed without rebooting the running instances
● is easier to achieve on the fly
● is cheaper and faster than moving to a larger VM
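As an illustration, with Docker the limits of a running container can be adjusted in place; the container name and values below are examples, not part of the original setup:

```shell
# Raise the CPU and memory limits of a running container without restarting it.
# "glassfish-worker" is a hypothetical container name.
docker update --cpus 2 --memory 4g --memory-swap 4g glassfish-worker

# Verify that the new memory limit took effect (prints the limit in bytes)
docker inspect --format '{{.HostConfig.Memory}}' glassfish-worker
```

The equivalent change on a VM typically means resizing or replacing the instance, which involves a reboot.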
7. Efficiency
(Comparison diagram: Virtual Machines vs. Containers on Bare Metal)
Container technology unlocks a new level of flexibility: resources that are not consumed up to the configured limits are automatically shared with other containers running on the same hardware node.
https://www.infoq.com/articles/java-cloud-cost-reduction
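A minimal sketch of this sharing model with Docker (image and container names are illustrative): a soft reservation guarantees a baseline, while unused headroom below the hard limit remains available to neighbouring containers.

```shell
# Hard limit of 4 GiB, soft reservation of 1 GiB: memory between the
# reservation and the limit is only claimed when actually needed, so
# idle headroom is effectively shared with other containers on the node.
docker run -d --name app1 \
  --memory 4g --memory-reservation 1g \
  my-glassfish-image
```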
16. Scaling GlassFish Server in VM
● Provision a new VM from a preconfigured GlassFish template
● Configure SSH access and add the VM as an SSH node to the DAS
● Create a new remote worker instance on the node via the DAS UI or the asadmin CLI
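The steps above map to asadmin subcommands roughly as follows; the hostname, node, cluster and instance names are placeholders:

```shell
# Register the freshly provisioned VM as an SSH node of the DAS
asadmin create-node-ssh --nodehost vm-worker-1.example.com \
  --sshuser gfadmin node1

# Create a worker instance on that node and start it
asadmin create-instance --node node1 --cluster cluster1 instance1
asadmin start-instance instance1
```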
17. Application Container + System Container
Pros
● IP, hostname and locally stored data survive downtimes
● No need for port mapping
● Better isolation and virtualization of resources
● Compatible with SSH-based config tools
● Live migration of memory state
Cons
● Slower start-up time
19. Scaling GlassFish in Containers
Worker nodes can be added and removed automatically, and attached to the DAS node, using a container orchestration platform and a set of automation scripts
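A simplified sketch of such an automation script, assuming a Docker-based setup where the DAS and workers share a network and the worker image already runs sshd; every name here is hypothetical:

```shell
#!/bin/sh
# Hypothetical automation: launch a worker container and attach it to the DAS.
set -e

WORKER="worker-$(date +%s)"        # unique container name
docker run -d --name "$WORKER" --network gf-net glassfish-worker-image

# Register the new container as an SSH node and bring up an instance on it
asadmin create-node-ssh --nodehost "$WORKER" --sshuser gfadmin "$WORKER-node"
asadmin create-instance --node "$WORKER-node" --cluster cluster1 "$WORKER-inst"
asadmin start-instance "$WORKER-inst"
```

Removal is the mirror image: stop and delete the instance, delete the node, then remove the container.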
21. Database Connection
● Keep connecting to the existing database running in a VM
● OR decompose the database into containers to gain the benefits of easy scaling and better resource utilization
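Either way, GlassFish reaches the database through a JDBC connection pool; a sketch for a containerized MySQL master, where the hostnames, credentials and pool names are placeholders:

```shell
# Create a connection pool pointing at the MySQL master container
asadmin create-jdbc-connection-pool \
  --datasourceclassname com.mysql.cj.jdbc.MysqlDataSource \
  --restype javax.sql.DataSource \
  --property user=app:password=secret:serverName=mysql-master:portNumber=3306:databaseName=appdb \
  mysql-pool

# Expose the pool to applications under a JNDI name
asadmin create-jdbc-resource --connectionpoolid mysql-pool jdbc/appdb

# Sanity-check the pool configuration
asadmin ping-connection-pool mysql-pool
```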
23. Scenario
● Create a standalone GlassFish server
● Create a MySQL master-slave database cluster
● Connect GlassFish to the databases
● Deploy the application
● Scale GlassFish and add an NGINX load balancer
● Clone the environment
● Install Traffic Distributor and connect it to both environments (original and clone)
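For the load-balancing step, a minimal NGINX upstream configuration might look like this; the upstream hostnames and ports are assumptions, not taken from the original setup:

```nginx
# Round-robin load balancing across two GlassFish workers
upstream glassfish {
    server gf-worker-1:8080;
    server gf-worker-2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://glassfish;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Scaling GlassFish then amounts to adding or removing `server` lines in the upstream block, which the orchestration platform can automate.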