Tuesday, March 31, 2020

Interview @ Sirion Labs - Round - 2

Hi Friends, 

In this interview post, I'm sharing interview questions and answers on cloud computing, Swagger, database sharding, etc.


Question 1:

What is Swagger and what are annotations used by swagger?

Answer:

Swagger is a set of tools for designing, building and documenting APIs based on the OpenAPI Specification.


Swagger vs OpenAPI:

OpenAPI : Specification
Swagger : Tool for implementing the specification

Annotations used by Swagger:


  • @Api
  • @ApiModel
  • @ApiModelProperty
  • @ApiParam
  • @ApiResponse
  • @ApiResponses



Question 2:

What are the steps to use Swagger?

Answer:

Steps to use Swagger are defined below:


  • Add the dependency:

    <dependency>
        <groupId>io.springfox</groupId>
        <artifactId>springfox-swagger2</artifactId>
    </dependency>

  • Also add the dependency <artifactId>springfox-swagger-ui</artifactId>.
  • Create a Swagger config class [a .java file] annotated with @EnableSwagger2 and an accompanying @Configuration annotation. In this class, create a Docket bean via an api() method.
  • Use @Api on controller class.
  • Use @ApiOperation and @ApiResponses on methods.
  • Use @ApiModel on entity class.
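Putting the steps together, a minimal Swagger configuration class might look like this (a sketch assuming the springfox-swagger2 dependency is on the classpath; the base package name com.example is hypothetical):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class SwaggerConfig {

    // The Docket bean tells springfox which controllers and paths to document.
    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.basePackage("com.example")) // hypothetical package
                .paths(PathSelectors.any())
                .build();
    }
}
```

With springfox-swagger-ui also on the classpath, the generated documentation UI is then served at /swagger-ui.html.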



Question 3:

What is a cloud? What are the benefits of using it?

Answer:

A cloud is actually a collection of web servers [instead of a single server] owned by a third party.

Cloud provides inexpensive, efficient and flexible alternatives to on-premises computers.

Benefits of cloud:


  • No extra space is required to keep all the hardware [servers, digital storage].
  • Companies don't need to buy software or software licenses for all their employees. They just pay a small fee to the cloud computing company to let their employees access a suite of software online.
  • It also reduces IT problems and costs.


Question 4:

What is sharding? When do we need sharding? How to implement MongoDB sharding?

Answer:

Sharding involves breaking up one data set into multiple smaller chunks, called logical shards. The logical shards can then be distributed across separate database nodes, referred to as physical shards, which can hold multiple logical shards.

Sharding is a method of splitting and storing a single logical dataset in multiple databases.

Sharding adds more servers to a database and automatically balances data and load across various servers. These databases are called shards.


When and why do we need sharding?


  • When the dataset outgrows the storage capacity of a single database instance.
  • When a single instance of DB is unable to manage write operations.


How to implement MongoDB sharding?

When deploying sharding, we need to choose a key from a collection and split the data using the key's value.

Task that the key performs:


  • Determines document distribution among the different shards in a cluster. 







Choosing the correct shard key:

To enhance and optimize the performance , functioning and capability of the DB, we need to choose the correct shard key.

Choosing correct shard key depends upon two factors:


  • The schema of the data.
  • The way database applications query and perform write operations.



Question 5:

How is an application deployed to Amazon EC2?

Answer:

Amazon EC2 offers the ability to run applications on the cloud.

Steps to deploy application on EC2:


  • Launch an EC2 instance and SSH into it. This instance needs to be created first on the AWS Console [console.aws.amazon.com], and we should have a certificate (key pair) to connect to the EC2 instance.
  • Install Node on EC2, if our app is in Angular.
  • Copy the code onto EC2 and install dependencies.
  • Start the server.

Another way:

Suppose we have Spring Boot application.
  • Build our Spring Boot app on our local computer and make a .jar file.
  • Upload this .jar file to S3.
  • Create an EC2 instance.
  • SSH into EC2 from our local computer.
  • Now, we are in the EC2 instance.
  • Install the JDK.
  • And using java -jar <jar file path>, we can run our application.



Question 6:

What is serverless Architecture?


Answer:

Serverless architecture means: focus on the application, not the infrastructure.

Serverless is a cloud computing execution model where the cloud provider dynamically manages the allocation and provisioning of servers.

A serverless application runs in stateless containers that are event-triggered and fully managed by the cloud provider.

Pricing is based on the number of executions rather than pre-purchased compute capacity.

Serverless computing is an execution model where the cloud provider is responsible for executing a piece of code by dynamically allocating the resources, and charges only for the resources used to run the code. The code typically runs inside stateless compute containers that can be triggered by a variety of events, including HTTP requests, database events, file uploads, scheduled events, etc.


The code that is sent to the cloud provider is in the form of a function. Hence serverless is sometimes also referred to as "Functions as a Service".

Following are the FaaS offerings of the major cloud providers:


  • AWS : Lambda
  • Microsoft Azure : Azure Functions
  • Google Cloud : Cloud Functions
  • IBM : OpenWhisk
  • Auth0 : Webtask




That's all from this interview.
Hope this post helps everybody in their interviews.
Thanks for reading!!


Monday, March 30, 2020

Interview @ SirionLabs

Hi friends. Recently, I interviewed at SirionLabs, Gurgaon.

So, sharing all the interview questions asked in SirionLabs here.

Questions are mainly on Architectural designs.


You can also go through my other interview posts:






Question 1:

What is the difference between Saga and 2 Phase Commit patterns?

Answer:

Saga and 2 Phase Commit patterns are used for distributed transactions.
A distributed transaction would do what a transaction would do but on multiple databases.

Differences:


  • 2 Phase Commit is for immediate transactions; the Saga pattern is for long-running transactions.
  • In 2 Phase Commit, there is a single controlling node, and if it goes down, the entire system fails. In the Saga pattern, there are multiple coordinators: Saga Execution Coordinators.
  • 2 Phase Commit works in 2 steps: prepare and commit, and the results of all the transactions become visible to the outside world at once. In Saga, there is only one step for execution and commit, but if any transaction fails, a compensating transaction is executed for the rollback operation.
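The compensation behaviour of the Saga pattern can be sketched in plain Java (the step names are hypothetical; a real Saga Execution Coordinator would also persist state and publish events):

```java
import java.util.*;

// Saga sketch (hypothetical names): each step has an action and a compensating
// action; when a step fails, the completed steps are undone in reverse order.
public class SagaDemo {

    static class Step {
        final String name;
        final Runnable action;
        final Runnable compensation;
        Step(String name, Runnable action, Runnable compensation) {
            this.name = name;
            this.action = action;
            this.compensation = compensation;
        }
    }

    static boolean runSaga(List<Step> steps, List<String> log) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step s : steps) {
            try {
                s.action.run();
                log.add("done:" + s.name);
                completed.push(s);
            } catch (RuntimeException e) {
                log.add("failed:" + s.name);
                // Compensating transactions roll back the completed steps.
                while (!completed.isEmpty()) {
                    Step c = completed.pop();
                    c.compensation.run();
                    log.add("compensated:" + c.name);
                }
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        runSaga(List.of(
                new Step("reserveStock", () -> {}, () -> {}),
                new Step("chargePayment", () -> { throw new RuntimeException("card declined"); }, () -> {}),
                new Step("ship", () -> {}, () -> {})
        ), log);
        System.out.println(log); // [done:reserveStock, failed:chargePayment, compensated:reserveStock]
    }
}
```

Note how the "ship" step never runs and "reserveStock" is compensated, instead of a 2PC-style global prepare/commit.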




Question 2:

Which database should be used when billions of payment requests are stored, and why?

Answer:

In this case, data is relational in nature like payment amount, user id, payment date, payment location, account number etc.

So, I can easily use SQL Server 2016 with proper indexes on proper fields.




Question 3:

Which Event Bus is used by Saga pattern?

Answer:

When we use event-based communication, a microservice publishes an event when something notable happens, such as when it updates a business entity. Other microservices subscribe to these events.
When a microservice receives an event, it can update its own business entities, which might lead to more events being published.

This publish/subscribe system is usually performed by using an implementation of an event bus.
The event bus is designed as an interface with the API needed to subscribe and unsubscribe to events and to publish events.
The implementation of this event bus can be a messaging queue like RabbitMQ or service bus like Azure service bus.



Question 4:

Explain ACID properties of database.


Answer :

A transaction is a single unit of work which accesses and possibly modifies the contents of a database. Transactions access data using read and write operations.
In order to maintain consistency in a database before and after a transaction, certain properties are followed. These are called ACID properties.

Atomicity:

By this, we mean that either the entire transaction takes place at once or doesn't happen at all.


Consistency:

This means that integrity constraints must be maintained so that the database is consistent before and after the transaction. It refers to the correctness of a database.


Isolation:

This property ensures that multiple transactions can occur concurrently without leading to inconsistency of database state. Transactions occur independently without interference.
Changes occurring in a particular transaction will not be visible to any other transaction until that change is written to memory or has been committed.


Durability:

This property ensures that once the transaction has completed execution, the updates and modifications to the database are stored in and written to disk and they persist even if system failure occurs. The effects of the transaction , thus, are never lost.



Question 5:

What is difference between MVC and MVP architectures?

Answer:


MVC Architecture:

Model-View-Controller





MVP Architecture:

Model-View-Presenter



MVP is just like MVC; the only difference is that the Controller is replaced with a Presenter.
View and Presenter are decoupled from each other using an interface, which is defined by the Presenter.

View implements this interface and communicates with Presenter through this interface. Due to this loose coupling it is easy to mock View during unit testing.




That's all for this post.

Hope this post helps everyone in interviews.

Thanks for reading.


Friday, March 27, 2020

Technical Architect interview in Incedo - Part - II

Hi Friends,

In this post I'm sharing interview questions-answers from one more interview @ Incedo.

Also go through my other interesting interview posts:




Technical Architect interview in Incedo :


Question 1:

Explain when and why to use Factory Design Pattern.

Answer 1:

Factory Design Pattern is used in following scenarios:


  • When we need to separate the logic of object creation from our other code or client code.
  • Whenever we need the ability to add more subclasses to the system and modify them without changing the client code.
  • When we want to let subclasses decide which object to instantiate by overriding the factory method.
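A minimal Factory Method sketch of these points (the Shape example and class names are illustrative, not from the interview):

```java
// Factory sketch: the client never calls `new Circle()` directly, so new
// Shape subclasses can be added without changing client code.
interface Shape {
    String draw();
}

class Circle implements Shape {
    public String draw() { return "circle"; }
}

class Square implements Shape {
    public String draw() { return "square"; }
}

class ShapeFactory {
    // Creation logic is centralized here, separated from client code.
    static Shape create(String type) {
        switch (type) {
            case "circle": return new Circle();
            case "square": return new Square();
            default: throw new IllegalArgumentException("unknown type: " + type);
        }
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        System.out.println(ShapeFactory.create("circle").draw()); // circle
    }
}
```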

UML diagram for Factory Design Pattern is :





Question 2:

Explain when and why to use Abstract Factory Design Pattern.

Answer 2:


Whenever we have to work with two or more objects that work together, forming a set/kit, and there can be multiple such sets or kits created by client code, we use the Abstract Factory Design Pattern.

Lets take an example of a War Game:

Medieval:
  • Land Unit
  • Naval Unit
Industrial:
  • Land Unit
  • Naval Unit

Here , we have two types of objects Land Unit and Naval Unit under Medieval age.
And also two types of objects under Industrial age.

So, we can take two factories : Medieval Age Factory and Industrial Age Factory.
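The war-game example above can be sketched as an Abstract Factory (the unit names are made up for illustration):

```java
// Abstract Factory sketch: each age factory produces a consistent set (kit)
// of land and naval units, so client code never mixes ages by mistake.
interface LandUnit {
    String name();
}

interface NavalUnit {
    String name();
}

interface AgeFactory {
    LandUnit createLandUnit();
    NavalUnit createNavalUnit();
}

class MedievalFactory implements AgeFactory {
    public LandUnit createLandUnit() { return () -> "knight"; }
    public NavalUnit createNavalUnit() { return () -> "galley"; }
}

class IndustrialFactory implements AgeFactory {
    public LandUnit createLandUnit() { return () -> "tank"; }
    public NavalUnit createNavalUnit() { return () -> "destroyer"; }
}

public class WarGameDemo {
    // Client code works against the abstract factory only.
    static String build(AgeFactory factory) {
        return factory.createLandUnit().name() + "+" + factory.createNavalUnit().name();
    }

    public static void main(String[] args) {
        System.out.println(build(new MedievalFactory()));   // knight+galley
        System.out.println(build(new IndustrialFactory())); // tank+destroyer
    }
}
```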

UML Diagram for above scenario is :














Question 3:

When and why to use Generic class? Explain with example. Can we use generics with array?

Answer 3:

A generic type is a type with formal type parameters.

e.g.:

interface Collection<E>{

    public void add(E e);
    public Iterator<E> iterator();

}

A parameterized type is an instantiation of a generic type with actual type arguments.

Example of a parameterized type:

Collection<String> coll = new LinkedList<String>();


A generic class is used when the same code needs to work with objects of different types.

e.g.:

Suppose we are developing a database library called Data Access Object (DAO) for a program that manages resources in a university.

We would write a DAO class for managing Students like:

public class StudentDAO{

    public void save(Student st){
          // some code here ....
    }

    public Student get(long id){

        // Some code here ...
    }

}

This looks fine. But if we need to persist Professor objects to the database, then we would write another DAO class:

public class ProfessorDAO{

    public void save(Professor st){
          // some code here ....
    }

    public Professor get(long id){

        // Some code here ...
    }

}

Note: These two classes are essentially the same. And what if we need to add more entity classes to the system, like Course, Facility, etc.?

So, to avoid such situation ,we write a Generic class:

public class GeneralDAO<T>{

    public void save(T entity){
        // some code here ....
    }

    public T get(long id){
        // Some code here ...
        return null; // placeholder
    }

}

Here, T is called the type parameter of the GeneralDAO class. T stands for any type. The following code illustrates how to use this generic class:

GeneralDAO<Student> studentDAO = new GeneralDAO<Student>();

Student student = new Student();

studentDAO.save(student);

Note: Type parameters are declared in the names of classes and interfaces. They can also be declared on methods and constructors (generic methods), where the type parameter section appears before the return type rather than in the name.




Question 4:

What is Type Erasure?

Answer 4:

Generics provide compile-time type safety: they ensure we only insert the correct type into a Collection and help avoid ClassCastException.

Generics in Java are implemented using type erasure.

Type erasure is the process of enforcing type constraints only at compile time and discarding the element type information at runtime.

There are two kinds of type erasure:


  • Class Type Erasure
  • Method Type Erasure 


e.g.:

public static <E> boolean  containsElement(E[] elements , E element){

    for(E e : elements){

        if(e.equals(element)){
            return true;
        }
    }
    return false;
}

When compiled, the unbounded type parameter E gets replaced with the actual type Object:

public static  boolean  containsElement(Object[] elements , Object element){

    for(Object e : elements){

        if(e.equals(element)){
            return true;
        }
    }
    return false;
}

The compiler ensures type safety of our code and prevents runtime errors.



Question 5:

Explain Exception handling in java.


Answer 5:

Types of Exceptions:


  • Checked Exceptions
  • Unchecked Exceptions
  • Error

Checked Exceptions: The classes that directly inherit from Throwable, except RuntimeException and Error, are known as checked exceptions, e.g. IOException, SQLException.


Unchecked Exceptions: The classes that inherit from RuntimeException are known as unchecked exceptions, e.g. NullPointerException, ArithmeticException.
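A short sketch of the difference (the IOException scenario is hypothetical): a checked exception must be declared in a throws clause or caught, and the compiler enforces this; an unchecked exception would need no such declaration.

```java
import java.io.IOException;

// Checked vs unchecked exceptions: the compiler forces callers of mayFail()
// to either catch IOException or declare it with `throws`.
public class ExceptionDemo {

    // Throws a checked exception, so callers must handle or declare it.
    static void mayFail(boolean fail) throws IOException {
        if (fail) throw new IOException("disk error");
    }

    static String handle(boolean fail) {
        try {
            mayFail(fail);
            return "ok";
        } catch (IOException e) {  // compiler forces us to handle this
            return "recovered: " + e.getMessage();
        } finally {
            // finally runs in both cases: release resources here
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(false)); // ok
        System.out.println(handle(true));  // recovered: disk error
    }
}
```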








Question 6:

What is static import in java? How to use it?

Answer 6:

Static import allows developers to access static members of a class directly. There is no need to qualify them by the class name.

Example:

import static java.lang.System.*;

class StaticImport{

    public static void main(String[] args){

        out.println("Ok"); // No need to use System.out
        out.println("Java");
    }

}

Output:

Ok
Java



That's all for this post.

Hope these interview questions help everybody.

Thanks for reading !!



Wednesday, March 25, 2020

DAO, DTO, VO, POJO, Spring Bean , @Transactional, Service Layer, Repository Layer

Hi Friends,

In this post, I'm highlighting some important concepts used in Spring.

For interview questions and answers , you can go through these posts:




DTO: DTO is an abbreviation for Data Transfer Object. It is used to transfer data between classes and modules of your application.

Few points to remember while using DTO:

  • A DTO should contain only private fields for the data, setters, getters and constructors.
  • It is not recommended to add business logic methods to such classes, but it is OK to add some util methods.

Example of DTO:

interface StudentDTO{

    String getName();
    void setName(String name);
}



DAO: DAO is an abbreviation for Data Access Object, so it should contain logic for retrieving, saving and  updating data in your data storage.


interface StudentDAO{

    StudentDTO findById(long id);
    void save(StudentDTO student);
}


VO: VO is an abbreviation for Value Object. A value object is an object, such as java.lang.Integer, that holds values (hence, value objects).



POJO: POJO stands for Plain Old Java Object. It is an ordinary lightweight object, not a special object, and in particular not an Enterprise JavaBean.



Java Bean: A JavaBean is a class that follows the JavaBeans conventions as defined by Sun.
They are used to encapsulate many objects into a single object (the bean), so that they can be passed around as a single bean object instead of as multiple individual objects.

A JavaBean is a Java object that is serializable and allows access to properties using getter and setter methods.

In order to function as a JavaBean class, an object class must obey certain conventions about method naming, construction and behavior.

The required conventions are:

  • The class must have a public default constructor. This allows easy instantiation.
  • The class properties must be accessible using get , set and other methods.
  • The class should be serializable. This allows applications and frameworks to reliably save, store and restore the bean's state.
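The conventions above in a minimal JavaBean (a hypothetical Student bean):

```java
import java.io.Serializable;

// A minimal JavaBean: public no-arg constructor, private property,
// getter/setter access, and Serializable for state saving.
public class Student implements Serializable {

    private String name; // private property

    public Student() {}  // public default constructor

    public String getName() { return name; }

    public void setName(String name) { this.name = name; }

    public static void main(String[] args) {
        Student s = new Student();
        s.setName("Alice");
        System.out.println(s.getName()); // Alice
    }
}
```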



Spring Bean:

A spring bean is basically an object managed by Spring. More specifically, it is an object that is instantiated, configured and otherwise managed by a Spring framework container.

Spring beans are defined in Spring configuration files (or, more recently, with annotations), instantiated by Spring containers, and then injected into applications.


@Transactional annotation: 

It describes a transaction attribute on an individual method or on a class.

@Transactional annotation defines the scope of a single database transaction. It provides declarative transaction management.

JPA doesn't provide any declarative transaction management.
When using JPA outside of a dependency injection container, transactions need to be handled programmatically by the developer.

EntityTransaction tx = entityManager.getTransaction();

try{
    tx.begin();
    businessLogic();
    tx.commit();
}
catch(Exception ex){
    tx.rollback();
    throw ex;
}

This way of managing transactions makes the scope of transaction very clear in the code, but it has several disadvantages:


  • It's repetitive and error-prone.
  • It decreases the readability of the code base.

But using @Transactional:

@Transactional
public void businessLogic(){
    // use entity manager inside a transaction 
}


By using @Transactional, many important aspects such as transaction propagation are handled automatically. In this case, if another transactional method is called by businessLogic(), that method will have the option of joining the ongoing transaction.

What does @Transactional mean?

There are 2 concepts in this:

  • Persistence Context
  • Database Transaction
The database transaction happens inside the persistence context.
In JPA, the persistence context is the EntityManager, implemented internally using a Hibernate Session (when using Hibernate as the persistence provider).

The persistence context is just a synchronizer object that tracks the state of a limited set of java objects and makes sure that changes on those objects are eventually persisted back into the database.

The @Transactional annotation should be used at the service layer. It is the layer that knows about units of work and use cases.
A @Service is a contract, and a modification in the presentation layer or repository layer should not require a rewrite of @Service code.

Generally, the DAO layer should be as light as possible and should exist solely to provide a connection to the DB, sometimes abstracted so different DB backends can be used together.



Service Layer:


This layer provides code modularity; the business logic and rules are specified in the service layer, which in turn calls the DAO layer. The DAO layer is then responsible only for interacting with the DB.

The service layer provides loose coupling between the Controller and the DAO layer.
Suppose the Controller has 50 methods and they call 20 methods of the DAO layer. If we change these DAO methods, then we need to change the 50 methods of the Controller.

But if we put 20 methods in the Service layer that call the 20 methods of the DAO layer, then we just need to change these 20 methods of the Service layer.
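A tiny sketch of this layering (all names hypothetical): the controller would depend only on the service, so changes in the DAO stay hidden behind the service contract.

```java
// Layering sketch: business rules live in the service; the DAO only touches
// the data store (here, an in-memory stand-in for a database).
interface StudentDao {
    String findNameById(long id);
}

class InMemoryStudentDao implements StudentDao {
    public String findNameById(long id) { return "student-" + id; }
}

class StudentService {
    private final StudentDao dao;

    StudentService(StudentDao dao) { this.dao = dao; }

    // Business rule lives in the service layer, not in the DAO.
    String greeting(long id) { return "Hello, " + dao.findNameById(id); }
}

public class LayersDemo {
    public static void main(String[] args) {
        StudentService service = new StudentService(new InMemoryStudentDao());
        System.out.println(service.greeting(7)); // Hello, student-7
    }
}
```

Swapping InMemoryStudentDao for a JDBC- or JPA-backed DAO would not change StudentService or its callers.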










Repository Layer:

This layer gives additional level of abstraction over data access. Repository layer exposes basic CRUD operations.

A Repository is a data access pattern in which data transfer objects are passed into a repository object that manages CRUD operations.


That's all for this post.

Thanks for reading !!





Tuesday, March 17, 2020

Microservices Interview - 2 : Client Side Discovery/Server Side Discovery



As we know, services need to call one another. In a monolithic application, services invoke each other through language-level method or procedure calls.

In a traditional distributed system deployment, services run at fixed, well-known locations, so they can easily call one another using a REST API or some RPC mechanism.

However, a microservice-based application runs in a virtualized or containerized environment where the number of instances of a service and their locations change dynamically.

So, the problem is:

How does the client service - API Gateway or another service - discover the location of a service instance?

Forces:

  • Each instance of a service exposes a remote API such as HTTP/REST at a particular location.
  • The number of service instances and their location change dynamically.
  • Virtual machines and containers are usually assigned dynamic IP addresses. 

Solution:

Client Side Service Discovery:

When making a request to a service, the client obtains the location of the service instance by querying a service registry, which knows the locations of all service instances.

  





So, in client-side service discovery, the client service [API Gateway] sends a request to the service registry to find a service and then load-balances the request across the instances of that service.
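A minimal in-memory sketch of client-side discovery (hypothetical names; real systems use a registry such as Eureka or Consul):

```java
import java.util.*;

// Client-side discovery sketch: the client looks up all instances of a
// service in the registry and picks one itself (naive round-robin here).
class ServiceRegistry {
    private final Map<String, List<String>> instances = new HashMap<>();

    void register(String service, String location) {
        instances.computeIfAbsent(service, k -> new ArrayList<>()).add(location);
    }

    List<String> lookup(String service) {
        return instances.getOrDefault(service, List.of());
    }
}

public class DiscoveryDemo {
    private static int counter = 0;

    // The client does the load balancing: round-robin over known instances.
    static String choose(ServiceRegistry registry, String service) {
        List<String> locations = registry.lookup(service);
        return locations.get(counter++ % locations.size());
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.register("orders", "10.0.0.1:8080");
        registry.register("orders", "10.0.0.2:8080");
        System.out.println(choose(registry, "orders"));
        System.out.println(choose(registry, "orders"));
    }
}
```

In server-side discovery, this lookup-and-choose logic would move out of the client into a router/load balancer.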



Server Side Service Discovery:





In server-side service discovery, the client service [API Gateway] sends a request to a router, a.k.a. load balancer, which in turn queries the service registry, finds the service and forwards the request to an appropriate instance.
  


Benefits of server side service discovery:

  • Compared to client side discovery, the client code is simpler since it does not have to deal with discovery. Instead, a client just makes a request to the router.
  • Some cloud environments provide this functionality , e.g.: AWS Elastic Load Balancer.







Monday, March 16, 2020

Microservice Interview -1 : API Gateway /Backends for Frontends



Question 1:

What is the API Gateway pattern?


Answer:

API Gateway is the entry point for all clients. The API Gateway handles requests in one of two ways:


  • Some requests are simply proxied/routed to the appropriate service.
  • It handles other requests by fanning out to multiple services.







Rather than provide a one-size-fits-all style API, the API gateway can expose a different API for each client.


Variation : Backends for Frontends:

A variation of API Gateway pattern is the Backends for Frontends.
It defines a separate API Gateway for each kind of client.






In this example, there are 3 different types of APIs for 3 different types of clients.

Benefits of using API Gateway Pattern:

  • Separates the clients from how the application is divided into microservices.
  • Frees the clients from determining the locations of microservices.
  • Provides the optimal API for each client.
  • Reduces the number of requests/round trips. e.g.: The API gateway enables the clients to retrieve data from multiple services with a single round trip. Fewer round trips mean fewer requests from the client and less overhead.
  • Simplifies the client by moving the logic for calling multiple services from the client to the API Gateway.

Limitations of using API Gateway Pattern:

  • Increased complexity: API Gateway needs to be developed, deployed and managed.
  • Increased response time due to the additional network hop through the API Gateway.

Security using API Gateway's Access Token Pattern:


Problem: How to communicate the identity of the requestor to the services that handle the request?

Solution:

The API Gateway authenticates the request and passes an access token [e.g. JSON Web Token] that securely identifies the requestor in each request to the services. A service can include the access token in requests it makes to other services.
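A simplified sketch of the idea (not a real JWT implementation; a production system would use a JWT library): the gateway signs the requestor's identity with an HMAC, and downstream services verify the signature before trusting it.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Toy access token: "payload.signature", signed with a shared secret.
// Services that know the secret can verify the token came from the gateway.
public class TokenDemo {

    static String sign(String payload, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            String sig = Base64.getUrlEncoder().withoutPadding()
                    .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
            return payload + "." + sig;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    static boolean verify(String token, String secret) {
        int dot = token.lastIndexOf('.');
        if (dot < 0) return false;
        // Re-sign the payload and compare: a tampered token will not match.
        return token.equals(sign(token.substring(0, dot), secret));
    }

    public static void main(String[] args) {
        String token = sign("user=alice", "gateway-secret");
        System.out.println(verify(token, "gateway-secret")); // true
        System.out.println(verify(token, "wrong-secret"));   // false
    }
}
```

A real JWT additionally carries a header, expiry and claims, and uses a standardized encoding.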

This pattern has following benefits:

  • The identity of the requestor is securely passed around the system.
  • Services can verify that the requestor is authorized to perform the operation. 








Saturday, March 14, 2020

Technical Architect interview in Incedo - Part - I

Hi friends,

In this post, I'm sharing Technical Architect interview questions and answers asked in Incedo.

You can also go through other interviews posts here:





Question 1:

What is database sharding? Why do we need database sharding?


Answer:

Sharding involves breaking up one's data into two or more smaller chunks, called logical shards. The logical shards are then distributed across separate database nodes, referred to as physical shards, which can hold multiple logical shards.

Sharding is a method of splitting and storing a single logical dataset in multiple databases.
Sharding adds more servers to a database and automatically balances data and load across various servers. These databases are called shards.


Sharding is also referred to as horizontal partitioning. The distinction between horizontal and vertical comes from the traditional tabular view of a database.




Question 2:

What are the types of database sharding?

Answer:


A database can be split vertically — storing different tables & columns in a separate database, or horizontally — storing rows of a same table in multiple database nodes.













Example of Vertical partitioning:

fetch_user_data(user_id) -> db["USER"].fetch(user_id)
fetch_photo(photo_id) -> db["PHOTO"].fetch(photo_id)

Example of Horizontal partitioning:

fetch_user_data(user_id) -> user_db[user_id % 2].fetch(user_id)
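The same routing idea in Java (a sketch; the key values and shard count are arbitrary): hash-based sharding picks the shard from the hash of the shard key, generalizing the user_id % 2 example above.

```java
// Hash-based shard routing sketch: every document with the same shard key
// value always maps to the same shard index.
public class ShardDemo {

    static int shardFor(String shardKey, int shardCount) {
        // Math.floorMod keeps the index non-negative even for negative hashes.
        return Math.floorMod(shardKey.hashCode(), shardCount);
    }

    public static void main(String[] args) {
        System.out.println(shardFor("user-42", 4));
        System.out.println(shardFor("user-43", 4));
    }
}
```

In MongoDB this mapping is managed by the cluster itself once a hashed shard key is declared; the sketch only illustrates the routing principle.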


Vertical sharding is implemented at the application level: a piece of code routes reads and writes to a designated database.
Natively sharded DBs include Cassandra and MongoDB.

Non-sharded DBs include SQLite, Memcached, etc.


When do we need to do sharding?

Answer:

  • When the data set outgrows the storage capacity of a single MongoDB instance.
  • When a single MongoDB instance is unable to manage write operations.




Question 3:

How to implement sharding in MongoDB?

Answer:


When deploying sharding, we need to choose a key from a collection and split the data using the key's value.

Task that the key performs:


  • Determines document distribution among the different shards in a cluster.




Choosing the correct shard key:

To enhance and optimize the performance, functioning and capability of the DB, we need to choose the correct shard key.


Choosing correct shard key depends on two factors:

  • The schema of the data.
  • The way database applications query and perform write operations.



Using range-based Shard key:

In range-based sharding, MongoDB divides data sets into different ranges based on the values of shard keys. In range-based sharding, documents having "close" shard key values reside in the same chunk and shard.







Data distribution in range-based partitioning can be uneven, which may negate some benefits of sharding.

For example, if a shard key field size increases linearly, such as time, then all requests for a given time range will map to the same chunk and shard. In such cases, a small set of shards may receive most of the requests and system would fail to scale.


Hash-based sharding:

In hash-based sharding, MongoDB first calculates the hash of a field's value and then creates chunks using those hash values. With this kind of sharding, documents in a collection are randomly distributed across the cluster.

No real schema is enforced:

  • We can have different fields in every document if we want to.
  • There is no single key as in other databases:
    • But we can create indices on any fields we want, or even combinations of fields.
    • If we want to shard, then we must do so on some index.






Question 4:

What is the use of Amazon EC2? What are the steps to deploy on EC2?

Answer:

Amazon EC2 : Amazon Elastic Compute Cloud

It offers the ability to run applications on the public cloud.

It eliminates the investment in hardware, and there is no need to maintain the hardware. We can use EC2 to launch as many virtual servers as we need.

Steps to deploy on EC2:

  • Launch an EC2 instance and SSH into it. Note: This instance needs to be created first on the AWS console [console.aws.amazon.com]. And we should also have a certificate to connect to the EC2 instance.
  • Install Node on the EC2 instance, if our app is in Angular.
  • Copy the code onto the EC2 instance and install dependencies.
  • Start the server.


OR:

  • Build our Spring Boot app on our own computer. Make a .jar file.
  • Upload this .jar to S3.
  • Create an EC2 instance.
  • SSH into it from our computer.
  • Now, we are in the EC2 instance.
  • We can install the JDK now.
  • And using java -jar <jar file path>, we can run our application.





Question 5:

How will we use Amazon S3?

Answer:

For using S3, we first choose S3 from the AWS console and then create a bucket in it for storing our files.

After creating the bucket, we click on the upload option, select the jar file or any other file from our computer, and upload it to S3.

While uploading a file to S3, we need to give it an access level so that we can download it from S3 to the EC2 instance.

So before that, we need to have the EC2 instance up and running.
And then we need to connect from our local machine to the EC2 instance using the private key that we have. For connecting to EC2, we need to do an SSH login [from the terminal] using a certificate and an SSH command.

Now we need to install Java on EC2 [as it is not there by default]. After that we can download our jar file from S3 to the EC2 instance.

Now just run this jar using java -jar <filename> to run the Spring Boot application.




Question 6:

What are the best Code Review Practices?

Answer:


Clean Code
  • Use intention-revealing names
  • Use solution-domain and problem-domain names
  • Classes should be small
  • Functions should be small
  • Functions should do one thing
  • Don't repeat yourself (avoid duplication)
  • Explain yourself in code: comments
  • Use exceptions rather than return codes
  • Don't return null



Security
  • Make a class final if it is not being used for inheritance
  • Avoid duplication of code
  • Minimize the accessibility of classes and members
  • Document security-related information
  • Input into a system should be checked for valid data size and range
  • Release resources (streams, connections) in all cases
  • Purge sensitive information from exceptions
  • Don't log highly sensitive information
  • Avoid dynamic SQL; use prepared statements
  • Limit the accessibility of packages, classes, interfaces, methods and fields
  • Avoid exposing constructors of sensitive classes
  • Avoid serialization of sensitive classes
  • Only use JNI when necessary




Performance
  • Avoid excessive synchronization
  • Keep synchronized sections small
  • Beware the performance of String concatenations
  • Avoid creating unnecessary objects

General

  • Don't ignore exceptions
  • Return empty arrays or collections, not nulls
  • In public classes, use accessor methods, not public fields
  • Avoid finalizers
  • Refer to objects by their interfaces
  • Always override toString()
  • Document thread safety
  • Use marker interfaces to define types
Static Code Analysis

  • Check the static code analyzer report for the classes added/modified



Question 7:

What are the types of HTTP status codes?

Answer:

There are 5 classes of HTTP status codes:

1XX Informational:

100: Continue


2XX Success:

200 : OK
201 : Created
202 : Accepted
204 : No Content


3XX Redirection:


4XX Client Error:

400 : Bad Request
401 : Unauthorized
402 : Payment Required
403 : Forbidden
404 : Not Found


5XX Server Error:

500 : Internal Server Error
501 : Not Implemented
502 : Bad Gateway
503 : Service Unavailable [Website's server is simply not available]



Question 8:

What are the Asymptotic Notations?

Answer:

To describe the running time complexity of an algorithm, we use the following asymptotic notations:





Big Oh Notation O:

The notation O(n) is the formal way to express the upper bound of an algorithm's running time.
It measures the worst case time complexity or the longest amount of time an algorithm can possibly take to complete.




For example, for a function g(n):

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0 }



Omega Notation:

The Omega notation is the formal way to express the lower bound of an algorithm's running time. It measures the best case time complexity or the best amount of time an algorithm can possibly take to complete.




For example, for a function g(n):

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0 }


Theta Notation:

The notation Θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time. It is represented as follows:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }





That's all for this interview post.
Hope this post helps everybody in their java Technical interviews.

Thanks for reading!!

