Tuesday, June 30, 2020

Java Principal Architect interview @ Aristocrat : Part - 2

Hi friends,

In this post, I'm sharing interview questions asked in the 2nd round at Aristocrat for the position of Java Principal Architect.


You can also go through my other posts:



Question 1:

What is the purpose of using @ServletComponentScan? 

Answer:

We add @ServletComponentScan to enable scanning for classes annotated with @WebFilter, @WebServlet and @WebListener.

It is used on the main SpringBootApplication.java class.

Embedded containers do not scan for @WebServlet, @WebFilter and @WebListener annotations on their own. That's why Spring introduced the @ServletComponentScan annotation, so that dependent jars which use these 3 annotations still work with embedded containers.

To use @ServletComponentScan, we need Spring Boot version 1.3.0 or above.
We also need to add the spring-boot-starter-parent and spring-boot-starter-web dependencies.

pom.xml file:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.1.RELEASE</version>
</parent>


<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <version>1.5.1.RELEASE</version>
    </dependency>
</dependencies>
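
A minimal sketch of how the annotation is typically placed (class names, URL pattern and response text are my own illustrative assumptions):

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.ServletComponentScan;

@ServletComponentScan   // enables scanning for @WebServlet, @WebFilter and @WebListener
@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}

// Registered by @ServletComponentScan, not by regular component scanning:
@WebServlet(urlPatterns = "/hello")
class HelloServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.getWriter().write("Hello from HelloServlet");
    }
}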




Question 2:

How does Servlet work and what are the lifecycle methods?

Answer:

A servlet is a class that handles requests, processes them and replies with a response.

e.g. we can use a servlet to collect input from a user through an HTML form, query records from a database and create web pages dynamically.

Servlets are under the control of another Java application called a servlet container. When an application running in a web server receives a request, the server hands the request to the servlet container, which in turn passes it to the target servlet.

Maven dependency for using servlet is given below:

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
</dependency>



Lifecycle methods of servlet are described below:

init(): The init() method is designed to be called only once. If an instance of the servlet does not exist, the web container does the following:

  • Loads the servlet class
  • Creates an instance of the servlet class
  • Initializes it by calling the init() method
The init() method must complete successfully before the servlet can receive any requests.
The servlet container cannot place the servlet in service if the init() method either throws a ServletException or does not return within a time period defined by the web server.

public void init() throws ServletException {

        // initialization code here
}




service(): This method is only called after the servlet's init() method has completed successfully.
The container calls the service() method to handle requests coming from the client; it interprets the HTTP request type (GET, PUT, POST, DELETE etc.) and dispatches to the doGet(), doPut(), doPost() and doDelete() methods accordingly.


public void service(ServletRequest req, ServletResponse res) throws ServletException, IOException {

        // request handling code here
}


destroy(): It is called by the servlet container to take the servlet out of service.
This method is only called after all the threads in the service method have exited or a timeout period has passed.
After the container calls this method, it will not call the service method again on the servlet.


public void destroy() {

        // cleanup code here
}
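
Putting the lifecycle together, a minimal illustrative sketch of a servlet (the class name and messages are assumptions, not from the interview):

import java.io.IOException;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LifecycleServlet extends HttpServlet {

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        // called once, before any request is served
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // service() dispatches GET requests here
        resp.getWriter().write("handled a GET request");
    }

    @Override
    public void destroy() {
        // called once, when the container takes the servlet out of service
    }
}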



Question 3:

What application server have you used and what are the benefits of that?

Answer:

I have used the WebLogic application server.
WebLogic provides various functionalities:

  • WebLogic provides support for access protocols like HTTP, SOAP etc.
  • It also provides data access and persistence from a database server.
  • It also supports SQL transactions for data integrity.
  • It also provides security.
So when we use WebLogic, we do not have to care about protocols, security, database transactions, data integrity etc. All of these are handled by the WebLogic server, and we can focus on business logic.




That's all for this post.
Thanks for reading!!

Monday, June 29, 2020

Java Principal Architect interview @ Aristocrat - Part 1

Hi Friends,

In this post, I'm sharing interview questions and answers at the Principal Architect level at Aristocrat.

Kindly go through my other interview posts:



Question 1:

What is Apache Kafka? When is it used? What is the use of Producer, Consumer, Partition, Topic, Broker and Zookeeper while using Kafka?

Answer:

Apache Kafka is a distributed publish-subscribe messaging system.
It is used for real-time streams of data and used to collect big data for real-time analysis.

Topic: A topic is a category, feed, or named stream to which records are published. Kafka stores topics in log files.

Broker: A Kafka cluster is a set of servers, each of which is called a broker.

Partition: Topics are broken up into ordered commit logs called partitions. Kafka spreads a topic's partitions across multiple servers or disks.

 

Producer: A producer can be any application that publishes messages to a topic. By default, the producer does not care which partition a specific message is written to and balances messages over every partition of a topic evenly.
Directing messages to a specific partition is done using a message key and a partitioner: the partitioner generates a hash of the key and maps it to a partition.
The producer publishes a message in the form of a key-value pair.

Consumer: A consumer can be any application that subscribes to a topic and consumes the messages.
A consumer can subscribe to one or more topics and read the messages sequentially.
The consumer keeps track of the messages it has consumed by keeping track of the offset of messages.

Zookeeper: This is used for managing and coordinating the Kafka brokers.
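
To make the roles concrete, here is a minimal sketch of a producer and a consumer using the official Kafka Java client; the topic name, group id and broker address are illustrative assumptions:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaSketch {

    public static void main(String[] args) {
        // Producer: publishes a key-value message to a topic
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "created"));
        }

        // Consumer: subscribes to the topic and reads messages, tracking offsets
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "order-consumers");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList("orders"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.offset() + " : " + record.key() + " = " + record.value());
            }
        }
    }
}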





Question 2:

What are the Kafka features?

Answer:

Kafka features are listed below:

  • High throughput: Provides support for hundreds of thousands of messages per second with modest hardware.
  • Scalability: A highly scalable distributed system with no downtime.
  • No data loss: Kafka ensures no data loss once configured properly.
  • Stream processing: Kafka can be used along with real-time streaming applications like Spark and Storm.
  • Durability: Provides support for persisting messages to disk.
  • Replication: Messages can be replicated across clusters, which supports multiple subscribers.




Question 3:

What are Kafka components?

Answer:

Kafka components are: 
  • Topic
  • Partition
  • Producer
  • Consumer
  • Messages

A topic is a named category or feed name to which records are published. Topics are broken up into ordered commit logs called partitions.
Each message in a partition is assigned a sequential id called an offset.
Data in a topic is retained for a configurable period of time.
Writes to a partition are generally sequential, thereby reducing the number of hard disk seeks.

Messages can be read from the beginning, and a consumer can also rewind or skip to any point in a partition by giving an offset value.

A partition is the actual storage unit of Kafka messages and can be thought of as a Kafka message queue. The number of partitions per topic is configurable at topic creation time.
Messages in a partition are segregated into multiple segments to ease finding a message by its offset.
The default size of a segment is very high, i.e. 1GB, but it can be configured.

Each segment is composed of the following files:

  • Log: Messages are stored in this file.
  • Index: stores message offset and its starting position in the log file.
  • TimeIndex: maps message timestamps to their offsets, to ease finding messages by time.
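
Since the partition count is fixed when a topic is created, here is a short sketch of creating a topic programmatically with the Kafka AdminClient; the topic name, partition count and replication factor are illustrative:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicSketch {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // topic "orders" with 3 partitions and replication factor 1
            NewTopic topic = new NewTopic("orders", 3, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}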




Question 4:

What is a Kafka cluster?

Answer:

A Kafka cluster is a set of servers, and each server is called a broker.

 

In the above diagram, the Kafka cluster is a set of servers, each shown as a broker. Producers publish messages to topics on these brokers, and consumers subscribe to those topics and consume the messages.

Zookeeper is used to manage the coordination among kafka servers.
 


Question 5:

What problem does Kafka resolve?

Answer:

Without any messaging queue implementation, the communication between client nodes and server nodes looks like the diagram below:




There are a large number of data pipelines used for communication. It is very difficult to update such a system or to add another node.

If we use Kafka, then the entire system looks like this:



So, all the client servers send messages to topics in Kafka, and all the backend servers consume messages from Kafka topics.




That's all for this post.
Thanks for reading!!



Saturday, June 20, 2020

Java Interview @ Fresher Level - 3

Hi friends,

In this post, I'm sharing interview questions asked @ fresher level.

You can also go through my other fresher level interview posts:



Question 1:

Name the types of  SQL databases.

Answer:

There are multiple SQL databases. A few are listed below:

  • MySQL
  • SQL Server
  • Oracle 
  • Postgres


Question 2:

What is the difference between MySQL and SQL Server?

Answer:

There are multiple differences between MySQL and SQL server databases:

  • MySQL is an open source RDBMS [owned by Oracle], whereas SQL Server is a Microsoft product.
  • MySQL supports more programming languages than SQL Server does. e.g.: MySQL supports Perl, Scheme, Tcl, Eiffel etc., which are not supported by SQL Server.
  • Multiple storage engine [InnoDB, MyISAM] support makes MySQL more flexible than SQL Server.
  • While using MySQL, the RDBMS blocks the database while backing up the data, and the data restoration process is time-consuming due to the execution of multiple SQL statements. Unlike MySQL, SQL Server does not block the database while backing up the data.
  • SQL Server is considered more secure than MySQL. MySQL allows its database files to be accessed and manipulated by other processes at runtime, but SQL Server does not allow any process to access or manipulate its database files or binaries.
  • MySQL doesn't allow cancelling a query mid-execution. On the other hand, SQL Server allows us to cancel a query execution midway through the process.



Question 3:

Why should we never compare Integer using == operator?

Answer:

Java 5 introduced autoboxing/unboxing, so an int can be stored in the wrapper class Integer. But we should not use the == operator to compare Integer objects. e.g.:

Integer i = 127;
Integer j = 127;

i == j will give true.

Integer ii = 128;
Integer jj = 128;

ii == jj will give false.

This is because the Integer.valueOf() method caches int values ranging from -128 to 127. So, within this range, it returns the same object.
Outside this range, it creates a new object.

Note:
Integer i = 127;
Integer j = new Integer(127);

Now, the == operator will give false, because the new operator always creates a new object, bypassing the cache.
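
A small runnable sketch of the caching behavior described above:

public class IntegerCacheDemo {

    public static void main(String[] args) {
        Integer a = 127, b = 127;
        System.out.println(a == b);        // true: both come from the Integer cache

        Integer c = 128, d = 128;
        System.out.println(c == d);        // false: outside the -128..127 cache range

        Integer e = 127, f = new Integer(127);
        System.out.println(e == f);        // false: new always creates a fresh object

        System.out.println(c.equals(d));   // true: equals() compares values, so use it instead
    }
}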

 

Question 4:

Why to use lock classes from concurrent API when we have synchronization?

Answer:

Lock classes in concurrent API provide fine-grained control over locking.

Interfaces: Lock, ReadWriteLock
Classes: ReentrantLock, ReentrantReadWriteLock



Question 5:

What are the methods for fine-grained control?

Answer:

ReentrantLock provides multiple methods for more fine-grained control:

  • isLocked()
  • tryLock()
  • tryLock(long timeout, TimeUnit unit)
The tryLock() method tries to acquire the lock without pausing the thread. That is, if the thread couldn't acquire the lock because it was held by another thread, it returns false immediately instead of waiting for the lock to be released.

We can also specify a timeout in the tryLock() method to wait for the lock to become available:

lock.tryLock(1, TimeUnit.SECONDS);

The thread will now pause for up to one second, waiting for the lock to become available. If the lock couldn't be acquired within 1 second, tryLock() returns false.
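
A minimal sketch of the tryLock() pattern (the lock usage and messages are illustrative):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSketch {

    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        // wait up to 1 second for the lock instead of blocking indefinitely
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                System.out.println("lock acquired, doing work");
            } finally {
                lock.unlock();   // always release in finally
            }
        } else {
            System.out.println("could not acquire lock within timeout");
        }
    }
}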



Question 6:

What is ReentrantReadWriteLock?

Answer:

ReadWriteLock consists of a pair of locks - one for read and one for write access. The read lock may be held by multiple threads simultaneously as long as the write lock is not held by any thread.

ReadWriteLock allows for an increased level of concurrency. It performs better compared to other locks in applications where there are fewer writes than reads. 
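
A short sketch of a read-mostly cache guarded by a ReentrantReadWriteLock (class and method names are my own assumptions):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadMostlyCache {

    private final Map<String, String> cache = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public String get(String key) {
        rwLock.readLock().lock();        // many readers may hold this at once
        try {
            return cache.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rwLock.writeLock().lock();       // exclusive: blocks both readers and writers
        try {
            cache.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}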


Question 7:

What is the difference between Lock and synchronized keyword?

Answer:

Following are the differences between Lock and synchronized keyword:

  • Having a timeout while trying to get access to a synchronized block is not possible. Using lock.tryLock(long timeout, TimeUnit unit), it is possible.
  • A synchronized block must be fully contained within a single method. A lock can have its calls to lock() and unlock() in separate methods.


Question 8:

Why to use Executor framework?

Answer:

We can use executor framework to decouple command submission from command execution.

Executor framework gives us the ability to create and manage threads. 

There are 3 interfaces defined in executor framework:

  • Executor
  • ExecutorService
  • ScheduledExecutorService
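
A minimal sketch of submitting a task to a fixed thread pool (pool size and task are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorSketch {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // submission is decoupled from execution: the pool decides when and on which thread it runs
        Future<Integer> result = pool.submit(() -> 6 * 7);

        System.out.println(result.get());   // blocks until the task completes, prints 42
        pool.shutdown();
    }
}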




Question 9:

How many groups of collection interfaces are there?

Answer:

There are two groups of collection interfaces:

  • Collection
    • List
      • ArrayList
      • Vector
      • LinkedList
    • Queue
      • LinkedList
      • PriorityQueue
    • Set
      • HashSet
      • LinkedHashSet
      • SortedSet
        • TreeSet
  • Map
    • HashMap
    • Hashtable
    • SortedMap
      • TreeMap



Question 10:

What is the definition of Iterable interface?

Answer:

public interface Iterable<T>{

    public Iterator<T> iterator();
}
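
Any class implementing this interface can be used with the enhanced for loop. A small illustrative sketch (the Range class is hypothetical):

import java.util.Iterator;

public class Range implements Iterable<Integer> {

    private final int from, to;

    public Range(int from, int to) {
        this.from = from;
        this.to = to;
    }

    @Override
    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            private int current = from;

            @Override
            public boolean hasNext() { return current < to; }

            @Override
            public Integer next() { return current++; }
        };
    }

    public static void main(String[] args) {
        for (int i : new Range(1, 4)) {
            System.out.println(i);   // prints 1, 2, 3
        }
    }
}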


That's all for this post.
Thanks for reading!!



Java Interview @ Polaris

Hi friends,

In this post, I'm sharing interview questions asked in Polaris.


Question 1:

public class Quiz23{

    public static void main(String[] args){

        int x = 0;
        int[] nums = {1,2,3,5};

        for(int i : nums){

            switch(i){

                case 1:
                    x += i;
                case 5:
                    x += i;
                default:
                    x += i;
                case 2:
                    x  += i;

            }
            System.out.println(x);
        }
    }
}

What will be the output of above code?

Answer:

4
6
12
27

Explanation:

Since System.out.println(x) is inside the loop, x is printed after every iteration.

For i = 1: case 1 matches and, due to fall-through, case 5, default and case 2 also run, so x becomes 4.
For i = 2: only case 2 runs, so x becomes 6.
For i = 3: no case matches, so default and case 2 run, and x becomes 12.
For i = 5: case 5, default and case 2 run, and x becomes 27.



Question 2:

What is the difference between Map and FlatMap?

Answer:

Map: Transforms the elements into something else. It accepts a function to apply to each element and returns a new Stream of the values returned by the passed function.

It takes a function object as the parameter, e.g.: Function, ToIntFunction, ToDoubleFunction, ToLongFunction.


FlatMap: A combination of the map and flat operations. So, we first apply the map operation on each element and then flatten the result.

So, if the function used by map returns a single value, map is fine. But if the function used by the map operation returns a stream of lists or a stream of streams, then we need to use flatMap to get a single stream of values.

For example:

Suppose we have a stream of Strings containing {"12", "34"} and a method getPermutations() which returns a list of permutations of a given string.

When we apply getPermutations() to each string of the stream using map, we get something like [["12","21"], ["34","43"]], but if we use flatMap, we get a flat stream of strings, e.g.: ["12","21","34","43"].

Another example:

List<Integer> evens = Arrays.asList(2, 4, 6);
List<Integer> odds = Arrays.asList(3, 5, 7);
List<Integer> primes = Arrays.asList(2, 3, 5, 7, 11);

List<Integer> numbers = Stream.of(evens, odds, primes)
                              .flatMap(list -> list.stream())
                              .collect(Collectors.toList());

System.out.println("flattened list : " + numbers);

Output: flattened list : [2, 4, 6, 3, 5, 7, 2, 3, 5, 7, 11]


 
Question 3:

What is the difference between passing int array and String array to Stream.of()?

Answer:

Stream.of(int[]) gives Stream<int[]>
Stream.of(String[]) gives Stream<String>

So, when using Stream.of() with an int[], we get Stream<int[]>. To get the ints from the Stream, we use flatMapToInt(i -> Arrays.stream(i)) to get an IntStream, and then we can use either map() or forEach().

e.g.:

int[] arr = {1, 2, 3, 4};
Stream<int[]> streamArr = Stream.of(arr);
IntStream intStream = streamArr.flatMapToInt(i -> Arrays.stream(i));
intStream.forEach(System.out::println);



Question 4:

What is a Boxed Stream?

Answer:

If we want to convert stream of objects to collection:

List<String> strings = Stream.of("how", "to", "do", "in", "java").collect(Collectors.toList());

The same process doesn't work on streams of primitives, however.

//Compilation Error:
IntStream.of(1,2,3,4,5).collect(Collectors.toList());

To convert a stream of primitives, we must first box the elements in their wrapper class and then collect them. This type of stream is called boxed stream.

Example of IntStream to List of Integers:

List<Integer> ints = IntStream.of(1,2,3,4,5).boxed().collect(Collectors.toList());

System.out.println(ints);

Output: [1,2,3,4,5]

 

Question 5:

List Spring Core and Stereotype annotations.

Answer:

Spring Core Annotations:
  • @Qualifier
  • @Autowired
  • @Configuration
  • @ComponentScan
  • @Required
  • @Bean
  • @Lazy
  • @Value

Spring framework Stereotype annotations:
  • @Component
  • @Controller
  • @Service
  • @Repository


Question 6:

What is difference  between @Controller and @RestController?

Answer:

@Controller creates a Map of model object and finds a view for that.
@RestController simply returns the object and object data is directly written into HTTP response as JSON or XML.

@RestController = @Controller + @ResponseBody
@RestController was added in  Spring 4.



Question 7:

Why Spring introduced @RestController when this job could be done by using @Controller and @ResponseBody?

Answer:

The combined behavior of @Controller and @ResponseBody is the default behavior of RESTful web services; that's why Spring introduced @RestController, which combines @Controller and @ResponseBody.



Question 8:

Tell something about Swagger tools which you have used.

Answer:

Swagger Editor:  Can edit OpenAPI specifications.

Swagger UI : A collection of HTML/CSS/Javascript assets that dynamically generate beautiful documentation.

Swagger Codegen: Allows generation of API client libraries.

Swagger Parser: Library for parsing OpenAPI definitions from Java.

Swagger Core: Java related libraries for creating, consuming and working with OpenAPI definitions.

Swagger Inspector: API testing tool

SwaggerHub : Built for teams working with OpenAPI.








That's all for this post.
Thanks for reading!!

Friday, June 19, 2020

Java Technical Interview @ IBM

Hi Friends,

In this post, I'm sharing interview questions and answers asked at IBM at an experience level of 13 years.



Question 1:

How is the locking mechanism implemented by the JVM?

Answer:

The implementation of the locking mechanism in Java is specific to the instruction set of the underlying platform.

For example, with x86, the JVM might use CMPXCHG - atomic compare and exchange - at the lowest level to implement the fast path of the lock.

The CMPXCHG instruction is a compare-and-swap instruction that guarantees atomic memory access at the hardware level.

If the thread cannot acquire the lock immediately, then it could spin, or it could perform a syscall to schedule a different thread.
Different strategies are used depending on the platform and on JVM switches.
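
At the Java level, the same compare-and-swap primitive is exposed through the java.util.concurrent.atomic classes. A small sketch of a lock-free update loop built on it:

import java.util.concurrent.atomic.AtomicInteger;

public class CasSketch {

    private static final AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) {
        // compareAndSet maps down to a hardware compare-and-swap (e.g. CMPXCHG on x86)
        int current;
        do {
            current = counter.get();
            // retry until no other thread changed the value in between
        } while (!counter.compareAndSet(current, current + 1));

        System.out.println(counter.get());   // prints 1
    }
}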



Question 2:

Which Event bus is used by Saga pattern?

Answer:

When we use event-based communication, a microservice publishes an event when something notable happens, such as when it updates a business entity. Other microservices subscribe to those events.

When a microservice receives an event, it can update its own business entities, which might lead to more events being published.

This publish/subscribe system is usually performed by using an implementation of an event bus.
The event bus is designed as an interface with the API needed to subscribe and unsubscribe to events and to publish events.
The implementation of this event bus can be a messaging queue like RabbitMQ or a service bus like Azure Service Bus.
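
A minimal sketch of such an event bus abstraction; the interface and method names here are hypothetical, not from any particular library:

import java.util.function.Consumer;

// hypothetical event bus abstraction
public interface EventBus {

    // deliver an event to all current subscribers of its type
    void publish(Object event);

    // register a handler for a given event type
    <T> void subscribe(Class<T> eventType, Consumer<T> handler);

    // unregister a previously registered handler
    <T> void unsubscribe(Class<T> eventType, Consumer<T> handler);
}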


Question 3:

What is the use of Amazon EC2? What are the steps to deploy on EC2? 

Answer:

Amazon EC2: Amazon Elastic Compute Cloud.
It offers the ability to run applications on the public cloud.

It eliminates the up-front investment in hardware, and there is no need to maintain any hardware. We can use EC2 to launch as many servers as we need.

Steps to deploy on EC2:

  • Launch an EC2 instance and SSH into it. Note: This instance needs to be created first in the Amazon console [console.aws.amazon.com], and we should also have the certificate (key pair) to connect to the EC2 instance.
  • Install Node on the EC2 instance, if our app is in Angular.
  • Copy the code to the EC2 instance and install all dependencies.
  • Start the server.


OR:

  • Build the Spring Boot app on the local machine and create a .jar file.
  • Upload this .jar to S3.
  • Create an EC2 instance.
  • SSH into it from the local computer. Now, we are inside the EC2 instance.
  • Install the JDK.
  • And using java -jar <jar-file-path>, we can run our application.


Question 4:

Why should we use ThreadPoolExecutor, when we have Executor Framework?

Answer:

Source code of Executors.newFixedThreadPool() is:

public static ExecutorService newFixedThreadPool(int nThreads) {

    return new ThreadPoolExecutor(nThreads, nThreads, 0L,
            TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
}

This method uses the ThreadPoolExecutor class with the default configuration, as seen in the code above.
Now, there are scenarios where the default configuration is not suitable, say a PriorityBlockingQueue needs to be used instead of a LinkedBlockingQueue.
In such cases, the caller can work directly on the underlying ThreadPoolExecutor by instantiating it and passing the desired configuration to it.

Note: One advantage of using ThreadPoolExecutor directly is that we can handle rejected tasks by supplying a RejectedExecutionHandler such as ThreadPoolExecutor.DiscardPolicy.
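
A short sketch of instantiating ThreadPoolExecutor directly with a custom queue and rejection policy (class names, pool sizes and priorities are illustrative):

import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolSketch {

    // tasks must be comparable so the priority queue can order them
    static class PriorityTask implements Runnable, Comparable<PriorityTask> {
        final int priority;
        final String name;

        PriorityTask(int priority, String name) {
            this.priority = priority;
            this.name = name;
        }

        @Override
        public void run() {
            System.out.println("running " + name);
        }

        @Override
        public int compareTo(PriorityTask other) {
            return Integer.compare(other.priority, this.priority); // higher priority first
        }
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1,                                    // single worker, so queue ordering matters
                0L, TimeUnit.MILLISECONDS,
                new PriorityBlockingQueue<>(),           // custom queue instead of LinkedBlockingQueue
                new ThreadPoolExecutor.DiscardPolicy()); // silently drop tasks when saturated

        pool.execute(new PriorityTask(1, "low"));
        pool.execute(new PriorityTask(10, "high"));
        pool.shutdown();
    }
}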



Question 5:

What is the difference between Spring 2 and Spring 5?

Answer:

Below are the differences between Spring 2 and Spring 5:

  • JDK baseline update
  • Core framework revision
  • Reactive programming model
  • Core Container updates
  • Testing improvements


Question 6:

What is the difference between @SpringBootApplication and @EnableAutoConfiguration?

Answer:

Following are the differences between @SpringBootApplication and @EnableAutoConfiguration:

Availability: @SpringBootApplication was introduced in version 1.2, while @EnableAutoConfiguration was introduced in version 1.0.

Purpose: @EnableAutoConfiguration enables the auto-configuration feature of a Spring Boot application, which automatically configures things if certain classes are present in the classpath, e.g. it can configure Thymeleaf's TemplateResolver and ViewResolver if Thymeleaf is present in the classpath.

On the other hand, @SpringBootApplication does three things:

  • It allows us to run the main class as a jar with an embedded container [Web server Tomcat].
  • It enables java configuration.
  • It enables component scanning. 


Question 7:

What happens when we call SpringApplication.run() method in main class of SpringBoot application?

Answer:

Syntax of the class containing main method looks like code below:

@SpringBootApplication
public class StudentApplication{

    public static void main(String[] args){

        SpringApplication.run(StudentApplication.class, args);
    }

}

When we run this class as a java application, then our application gets started.

SpringApplication.run() is a static method and it returns an object of  ConfigurableApplicationContext.

ConfigurableApplicationContext ctx = SpringApplication.run(StudentApplication.class, args);

Thus, Spring container gets started once run() method gets called.

Spring container once started is responsible for:

  • Creating all objects: This is done by @ComponentScan. Remember, @SpringBootApplication is a combination of @ComponentScan + @Configuration + @EnableAutoConfiguration.
  • Dependency Injection
  • Managing the lifecycles of all beans.


That's all for this post.
Thanks for reading!!

Tuesday, June 16, 2020

Java Interview @ Fresher Level - 2

Hi Friends,

In this post, I am sharing interview questions for java freshers for round - 2.




Question 1:

Write down the definition of Iterable interface.

Answer:

public interface Iterable<T>{

    public Iterator<T> iterator();
}


Question 2:

How does HashSet work?

Answer:

When we create a HashSet instance, it internally creates an instance of HashMap in its constructor. When we add an element, it is put into that HashMap as a key, so a duplicate is detected by comparing it against the existing keys of the HashMap entries.


So, when we call add(3), the value 3 is stored as a key in the HashMap and some dummy value [a shared static Object, named PRESENT] is stored as the value.


So, in HashSet add() method is defined as:

public boolean add(E e){

        return map.put(e, PRESENT) == null;
}

Note: map.put() returns null if it adds a new key-value pair. If the key already exists, it returns the previous value.
So, if the key is not present in the HashMap, map.put() returns null and the add() method of HashSet returns true. Else it returns false.



Question 3:

What copy technique is originally used by HashSet clone() method?

Answer:

There are two common copy techniques in object oriented programming languages: deep copy and shallow copy.

To create a clone or copy of the Set object, HashSet internally uses shallow copy in the clone() method; the elements themselves are not cloned. In other words, a shallow copy is made by copying the references to the objects.

 

Question 4:

What is the difference between HashSet and TreeSet?

Answer:

There are multiple differences between HashSet and TreeSet which are described below:

  • Ordering of the elements: HashSet does not maintain any order, while TreeSet keeps the elements sorted.
  • Null value: TreeSet doesn't allow a null value.
  • HashSet implements the Set interface while TreeSet implements the NavigableSet interface.
  • HashSet uses the equals() method for comparison while TreeSet uses the compareTo() method for comparison.
Note: The underlying data structure of TreeSet is a Red-Black tree, which is a self-balancing BST and is thus sorted. For sorting, it uses a comparator or the elements' natural ordering; neither is null safe by default, which is why TreeSet doesn't allow a null value.

So, when we call TreeSet.add(null), it compiles, but at runtime it throws NullPointerException.
 


Question 5:

How does a fail fast iterator come to know that the internal structure has been modified?

Answer:

The iterator reads the internal data structure (the object array) directly. The internal data structure should not be structurally modified while iterating through the collection.

To ensure this, the collection maintains an internal counter called modCount. The iterator checks this counter whenever it gets the next value (in the hasNext() and next() methods). The value of modCount changes whenever there is a structural modification, which signals the iterator to throw ConcurrentModificationException.
 


Question 6:

What is fail safe iterator?

Answer:

A fail safe iterator makes a copy of the internal data structure (the object array) and iterates over the copied data structure.
Any structural modification done to the collection affects only a new copy of the data structure, so the snapshot the iterator is working on remains structurally unchanged.
Hence, no ConcurrentModificationException is thrown by a fail safe iterator.
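
A small sketch contrasting the two behaviors, using ArrayList (fail fast) and CopyOnWriteArrayList (fail safe); list contents are illustrative:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class IteratorDemo {

    public static void main(String[] args) {
        // Fail fast: structural modification during iteration trips the modCount check
        List<Integer> failFast = new ArrayList<>(Arrays.asList(1, 2, 3));
        try {
            for (Integer i : failFast) {
                failFast.add(4);   // throws ConcurrentModificationException
            }
        } catch (java.util.ConcurrentModificationException e) {
            System.out.println("fail fast iterator threw CME");
        }

        // Fail safe: the iterator works on a snapshot, so no exception is thrown
        List<Integer> failSafe = new CopyOnWriteArrayList<>(Arrays.asList(1, 2, 3));
        for (Integer i : failSafe) {
            failSafe.add(4);       // each add creates a new copy; iteration sees the old snapshot
        }
        System.out.println("fail safe size after iteration: " + failSafe.size()); // 6
    }
}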

 

Question 7:

What is the difference between Collections and Collection?

Answer:

java.util.Collections is a class which contains static utility methods only, and most of those methods throw NullPointerException if the object or class passed to them is null.

java.util.Collection is an interface, which is the base interface for collections like List, Set and Queue. Note that Map is part of the collections framework but does not extend Collection.


Question 8:

What is the difference between Iterator and ListIterator?

Answer:

We can use Iterator to traverse Set and List collections whereas ListIterator can be used with Lists only.

Iterator can traverse in forward direction only whereas ListIterator can be used to traverse in both the directions.


Question 9:

How many ways are there to traverse or loop Map, HashMap , TreeMap in java?

Answer:

We have 4 ways to traverse:

  • Take map.keySet() and loop using foreach loop
  • Take map.keySet() and loop using Iterator
  • Take map.entrySet() and loop using foreach loop
  • Take map.entrySet() and loop using iterator.
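
A sketch of the four approaches (map contents are illustrative):

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class MapTraversal {

    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        // 1. keySet() with a foreach loop
        for (String key : map.keySet()) {
            System.out.println(key + " = " + map.get(key));
        }

        // 2. keySet() with an Iterator
        Iterator<String> keys = map.keySet().iterator();
        while (keys.hasNext()) {
            String key = keys.next();
            System.out.println(key + " = " + map.get(key));
        }

        // 3. entrySet() with a foreach loop (avoids the extra get() lookup)
        for (Map.Entry<String, Integer> entry : map.entrySet()) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }

        // 4. entrySet() with an Iterator
        Iterator<Map.Entry<String, Integer>> entries = map.entrySet().iterator();
        while (entries.hasNext()) {
            Map.Entry<String, Integer> entry = entries.next();
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}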


That's all from this post.
Thanks for reading!!

Monday, June 15, 2020

Interview @ Xebia: Round - 2

Hi Friends,

In this post, I am sharing interview questions and answers asked in 2nd round for Java Technical Architect @ Xebia.


Question 1:

What are the different ways of managing REST API versioning?  And how do you manage the versioning for some update() method for the new client?


Answer:

There are multiple ways of managing the REST API versioning:

  • Through a URI path:  We include the version number in the URI path of the endpoint. e.g.:  api/v1/shares
  • Through query parameters: We pass the version number as a query parameter with a specified name e.g.: api/shares?version=1
  • Through custom HTTP headers: We define a new header that contains the version number in the request.
  • Through a content negotiation: The version number is included in the "Accept" header together with the accepted content type.

Suppose we have an update method with the endpoint given below. Initially, it was accessible under the /v1.0 path; now it is also available under the /v1.1/{id} path.


@PutMapping("/v1.0")
public ShareOld update(@RequestBody ShareOld share){

    return (ShareOld)repository.update(share);

}

@PutMapping("/v1.1/{id}")
public ShareOld update(@PathVariable("id") long id,  @RequestBody ShareOld share){

    return (ShareOld)repository.update(share);

}


And if we have a GET mapping that remains the same for both versions, then we can write the GET mapping as:

@GetMapping({"/v1.0/{id}", "/v1.1/{id}"})
public Share findByIdOld(@PathVariable("id") long id){

    return (Share)repository.findById(id);
}


In this case, as we have 2 different versions of our API, we need to create two Docket beans, one per version.


e.g.:

@Bean
public Docket swaggerShareApi10(){

    return new Docket(DocumentationType.SWAGGER_2)
            .groupName("share-api-1.0")
            .select()
            .apis(RequestHandlerSelectors.basePackage("pl.piomin.services.versioning.controller"))
            .paths(regex("/share/v1.0.*"))
            .build()
            .apiInfo(new ApiInfoBuilder()
                    .version("1.0")
                    .title("Share API")
                    .description("Documentation Share API v1.0")
                    .build());
}


@Bean
public Docket swaggerShareApi11(){

    return new Docket(DocumentationType.SWAGGER_2)
            .groupName("share-api-1.1")
            .select()
            .apis(RequestHandlerSelectors.basePackage("pl.piomin.services.versioning.controller"))
            .paths(regex("/share/v1.1.*"))
            .build()
            .apiInfo(new ApiInfoBuilder()
                    .version("1.1")
                    .title("Share API")
                    .description("Documentation Share API v1.1")
                    .build());
}


Now, when we launch Swagger UI, it shows us the dropdown displaying both versions and we can easily switch between them.



Question 2:

What is the difference between MongoDB and Cassandra?

Answer:

  • MongoDB is a free and open source, cross-platform, document-oriented database system, while Cassandra is an open source, distributed and decentralized, column-oriented database system.
  • MongoDB does not have triggers while Cassandra has triggers.
  • MongoDB has secondary indexes while Cassandra has only restricted secondary indexes.
  • Cassandra uses a selectable replication factor while MongoDB uses master-slave replication.
  • MongoDB is used when we need to store data in a JSON-style format in documents consisting of key-value pairs, while Cassandra is used as a decentralized database for big data.




Question 3:

What are the steps for using MongoDB in java project?

Answer:

Steps for using MongoDB are described below:

  • Add the following dependencies in the build.gradle file:
    • compile group: 'org.mongodb', name: 'mongodb-driver', version: '3.11.0'
    • compile group: 'org.mongodb', name: 'bson', version: '3.11.0'
    • compile group: 'org.mongodb', name: 'mongodb-driver-core', version: '3.11.0'
    • compile group: 'org.mongodb', name: 'mongo-java-driver', version: '3.11.0'
  • Specify the Mongo DB URL and Mongo DB name in the application.yml file.
  • Create a MongoClient instance using MongoCredential, MongoClientOptions.Builder and a list of Mongo server addresses (see the sketch below).
  • Create a MongoCollection bean.
  • Create a class implementing the HealthIndicator interface, overriding the health() method.
  • Now in the DAO implementation class, call MongoCollection methods such as find(), insertOne(), countDocuments(), count(), bulkWrite() etc.
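
A minimal sketch of the MongoClient setup described above, using the 3.x driver API; the host, credentials and database/collection names are assumptions:

import java.util.Collections;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.MongoCredential;
import com.mongodb.ServerAddress;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;

public class MongoSketch {

    public static void main(String[] args) {
        MongoCredential credential = MongoCredential.createCredential("appUser", "appDb", "secret".toCharArray());
        MongoClientOptions options = MongoClientOptions.builder().build();

        // one server address here; a replica set would pass several
        MongoClient client = new MongoClient(
                Collections.singletonList(new ServerAddress("localhost", 27017)),
                credential, options);

        MongoDatabase db = client.getDatabase("appDb");
        MongoCollection<Document> users = db.getCollection("users");

        users.insertOne(new Document("name", "alice"));
        System.out.println(users.countDocuments());

        client.close();
    }
}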

That's all for this post.
Thanks for reading!


