AWS Serverless Application Model (SAM) and AWS Toolkit


SAM: The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines per resource, you can define the application you want and model it using YAML. During deployment, SAM transforms and expands the SAM syntax into AWS CloudFormation syntax, enabling you to build serverless applications faster. (Ref: AWS documentation.)

AWS Toolkit: The AWS Toolkit is an open-source plug-in that makes it easier to create, debug, and deploy Java and Python applications on Amazon Web Services. It also adds an explorer to your IDE for services such as CloudFormation, CloudWatch Logs, DynamoDB, ECR, ECS, Lambda, S3, SQS, and Schemas.

This guide assumes the AWS CLI is already installed on your machine; if it is not, please follow this link.

Here is a quick set of commands for macOS.

#brew install awscli
#aws --version

Assuming you have already created your AWS account, we will now configure the AWS account credentials on the local machine for later use. We will create a default profile so that it is auto-detected by the AWS Toolkit.

#aws configure
##it will ask for various inputs as below
AWS Access Key ID [None]: xxxxxxxx
AWS Secret Access Key [None]: xxxxxxx
Default region name [None]: xxxxx
Default output format [None]: json

Once the above command completes successfully, the AWS CLI is configured.

Let’s install SAM on the local machine
(assuming you have Homebrew installed on your Mac).

#brew tap aws/tap
#brew install aws-sam-cli
#sam --version

If you are a Windows or Linux user, please visit this link, where you will find detailed installation steps.

Now that SAM is successfully installed on the local machine, I would suggest installing the “AWS Toolkit” plugin in your IDE; in my case it is IntelliJ IDEA.

Once the AWS Toolkit is successfully installed, you will see an “AWS Toolkit” section in the bottom-left corner of IntelliJ IDEA; click on it and you will see a window similar to the one below.

Note – SAM must be installed before the AWS Toolkit can generate a SAM project.

Let’s jump into the IDE to generate a sample AWS serverless project.

IntelliJ IDEA -> File -> Projects -> AWS -> Next -> …

After clicking the ‘Create’ button, you will see the “HelloWorld” project generated.


This project contains source code and supporting files for a serverless application that you can deploy with the SAM CLI. It includes the following files and folders.

  • HelloWorldFunction/src/main – Code for the application’s Lambda function.
  • events – Invocation events that you can use to invoke the function.
  • HelloWorldFunction/src/test – Unit tests for the application code.
  • template.yaml – A template that defines the application’s AWS resources.

The application uses several AWS resources, including Lambda functions and API Gateway endpoints. These resources are defined in the template.yaml file. You can update the template to add AWS resources through the same deployment process that updates your application code. You can find more details in the README.md file.
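If you picked the Java runtime, the generated handler looks roughly like the sketch below (class and method names can differ slightly between SAM template versions):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

public class App implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent input, Context context) {
        // API Gateway maps this return value to the HTTP response.
        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200)
                .withBody("{\"message\": \"hello world\"}");
    }
}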

Let’s build and deploy the project.

#sam build
output:
Build Succeeded

Built Artifacts  : .aws-sam/build
Built Template   : .aws-sam/build/template.yaml

Now that the sample project builds successfully, let’s deploy it. Since we don’t yet know the further steps and inputs, we will use the command below with the --guided option; SAM will ask you several questions about the environment.

#sam deploy --guided
Output:
Configuring SAM deploy
======================

        Looking for config file [samconfig.toml] :  Not found

        Setting default arguments for 'sam deploy'
        =========================================
        Stack Name [sam-app]: HelloWorld
        AWS Region [ap-south-1]: ap-south-1
        Confirm changes before deploy [y/N]: y
        Allow SAM CLI IAM role creation [Y/n]: Y
        Disable rollback [y/N]: N
        Save arguments to configuration file [Y/n]: y
        SAM configuration file [samconfig.toml]: 
        SAM configuration environment [default]: 

        Looking for resources needed for deployment:
         Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-hyha0acabvgp
         A different default S3 bucket can be set in samconfig.toml

        Saved arguments to config file
        Running 'sam deploy' for future deployments will use the parameters saved above.
        The above parameters can be changed by modifying samconfig.toml
        Learn more about samconfig.toml syntax at 
        https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-config.html

After all the inputs are provided, SAM initiates the deployment of the serverless application and prints inline status for the resources being created. Let’s visit the AWS console to check whether the stack was created.

Resources created with the HelloWorld example:

Now that we have verified the created resources and they are working fine, let’s delete them to avoid unnecessary cost.

#sam delete --stack-name HelloWorld
Output:
 Are you sure you want to delete the stack HelloWorld in the region ap-south-1 ? [y/N]: y
        Are you sure you want to delete the folder HelloWorld in S3 which contains the artifacts? [y/N]: y
        - Deleting S3 object with key HelloWorld/6f1511bb62dde9a7dc63aab2fdc56321
        - Deleting S3 object with key HelloWorld/70a14d89c5765111d1922ecf99ce00a5.template
        - Deleting Cloudformation stack HelloWorld

Deleted successfully

Here I have tried to quickly present a simple way to kick-start a serverless project. Let me know if you think any specific steps should be included.


Kafka: Kafka consumer with SpringBoot

After successfully configuring the producer with Spring Boot in the previous post, we will now configure the consumer with Spring Boot.

Let’s get started.

Step 1: Start the Zookeeper and Kafka server on your local.

Step 2: Create a spring boot project with Kafka dependencies.

Create a Spring Boot project and add the below dependencies to your build.gradle / pom.xml.

implementation group: 'org.apache.kafka', name: 'kafka-clients', version: '2.6.0'
implementation group: 'org.springframework.kafka', name:'spring-kafka'

Step 3: Consumer application properties

server.port=6000
kafka.bootstrap.server=localhost:9092
kafka.topic.name=greetings
kafka.group.id=G1

Step 4: Consumer Configuration

We need to create a ConsumerFactory bean and a KafkaListenerContainerFactory bean. The Kafka consumer configuration class requires the @EnableKafka annotation so that @KafkaListener annotations on Spring-managed beans are detected.

@Configuration
@EnableKafka
public class KafkaConsumerConfig {
    private static Logger log = LoggerFactory.getLogger(KafkaConsumerConfig.class);

    @Value(value = "${kafka.bootstrap.server}")
    private String bootstrapAddress;

    @Value(value = "${kafka.topic.name}")
    public String topic;

    @Value(value = "${kafka.group.id}")
    private String kafkaGroupId;

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        log.info("Initializing consumer factory ...");
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaGroupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}

Step 5: Implement listener to consume messages

@Service
public class KafkaConsumerListener {
    private static Logger log = LoggerFactory.getLogger(KafkaConsumerListener.class);

    @KafkaListener(topics = "${kafka.topic.name}", groupId = "${kafka.group.id}",
            containerFactory = "kafkaListenerContainerFactory")
    public void consumeGreetings(@Payload String greetings, @Headers MessageHeaders headers) {
        log.info("Message from kafka: " + greetings);
    }
}

Spring also supports a single listener listening to multiple topics:

@KafkaListener(topics = {"topic1", "topic2"}, groupId = "G1")

Multiple listeners can also be implemented for the same topic, but they must belong to different consumer groups, as sketched below.
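For illustration, a hypothetical sketch with two listeners on the same topic but in different groups (the group names are made up), so each group receives every message:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class MultiGroupListener {

    // Both methods receive every message published to "greetings",
    // because each listener belongs to its own consumer group.
    @KafkaListener(topics = "greetings", groupId = "G1")
    public void listenFromGroupOne(String message) {
        System.out.println("Group G1 received: " + message);
    }

    @KafkaListener(topics = "greetings", groupId = "G2")
    public void listenFromGroupTwo(String message) {
        System.out.println("Group G2 received: " + message);
    }
}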

Summary:

In this post I have shown you how to configure a Kafka consumer and consume messages from a topic in a Spring Boot application.


Generate Spring project base template in Java/Kotlin/Groovy

There are various ways to generate a REST application base template in Java. Today I’m going to share one of them.

Why do we need to generate a template?

Using your own project structure is fine for an example or test project, but for a production application or any serious project we must follow a standard. Frameworks like Spring have certain rules for reading application properties and define a standard project structure, which is very useful and self-explanatory.

You can see one such example below, with a clearly defined ‘src’ section that contains the main and test application sources. Apart from this there is a HELP.md where you can describe your application, and there are Gradle config files that can be replaced with Maven if you choose the Maven build system.


├── HELP.md
├── build.gradle
├── gradle
│   └── wrapper
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradlew
├── gradlew.bat
├── settings.gradle
└── src
    ├── main
    │   ├── java
    │   │   └── com
    │   │       └── example
    │   │           └── demo
    │   │               └── DemoApplication.java
    │   └── resources
    │       └── application.properties
    └── test
        └── java
            └── com
                └── example
                    └── demo
                        └── DemoApplicationTests.java

This is the basic template, and creating this structure manually every time is not worth the effort when there are already tools out there.
Spring introduced Spring Initializr, which can generate the base template for you; the developer only needs to generate it and import it into the IDE.

How to do this?

Visit https://start.spring.io/

Hope you like this 🙂

Spring Boot Admin


When the Actuator was introduced, the first thing that came to my mind was: what if we had a common place where I could see all my applications’ endpoints and manage them from there?

Thanks to the codecentric team, this is possible. If you are not aware of the Actuator, I would suggest first reading about the Actuator library.

Managing applications using Actuator endpoints alone is quite difficult, because with a bunch of applications you don’t have a common place to see them all. With the help of Spring Boot Admin, it’s easier to view a dashboard for multiple microservices in a single place.

In this article I’m going to describe a basic example of the Spring Boot Admin client and server. codecentric has introduced the Spring Boot Admin Server, which has an inbuilt UI dashboard that shows client details. Isn’t it amazing to see all your available clients in one place? Let’s see how we can implement this.

How does it work?

Let’s do server and client setup

Spring boot admin server setup

First, go to Spring Initializr and generate a sample project.

start.spring.io screenshot to generate project

Once the project is generated, you will find the below dependency in your build.gradle file.

implementation 'de.codecentric:spring-boot-admin-starter-server'

Now let’s enable the Admin server in the project using the @EnableAdminServer annotation.

@SpringBootApplication
@EnableAutoConfiguration
@EnableAdminServer
public class SbadminserverApplication {

	public static void main(String[] args) {
		SpringApplication.run(SbadminserverApplication.class, args);
	}

}

Logically the dashboard should be secured, so let’s add basic authentication using Spring Security.

implementation 'org.springframework.boot:spring-boot-starter-security'
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration(proxyBeanMethods = false)
public class SecuritySecureConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.formLogin().loginPage("/login").permitAll();
        http.logout().logoutUrl("/logout").permitAll();
        http.csrf().ignoringAntMatchers("/actuator/**", "/instances/**", "/logout");
        http.authorizeRequests().antMatchers("/**/*.css", "/assets/**", "/third-party/**", "/logout", "/login")
                .permitAll();
        http.authorizeRequests().anyRequest().authenticated();
        http.httpBasic(); // Activate Http basic Auth for the server
    }
}

Now assign a default username and password to your server; if we don’t define Spring Security credentials, a random password is generated at runtime and printed to the console.

spring.security.user.name=admin
spring.security.user.password=admin

If you want to update the server title, you can use the below property.

spring.boot.admin.ui.title=AdminConsole

Spring boot admin client setup

Generate client project
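The client project needs the Spring Boot Admin client starter on its classpath; in build.gradle that would look like the line below (the version is typically managed via the Spring Boot Admin dependency BOM or set explicitly):

implementation 'de.codecentric:spring-boot-admin-starter-client'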

Once the project is generated, configure the server details in the client properties file. If you run both applications on the same machine, make sure the ports are different to avoid a port binding error.

spring.application.name=sb-client
server.port=8888
management.endpoints.web.exposure.include=*

spring.boot.admin.client.url=http://localhost:8080
spring.boot.admin.client.username=admin
spring.boot.admin.client.password=admin

Now run both projects and log in to the server.

Spring boot admin server dashboard


Summary: If you are looking for a prebuilt dashboard for your microservices, you can consider this as one of the options. As this project is open source, you can adapt some of its features to your requirements.

Hope you like this 🙂


Spring boot actuator

Actuator

Actuator is a Spring Boot sub-project that exposes production-ready support features for a Spring Boot application.

Key features offered by actuator

  • Health check: You can use the health endpoint to check the status of your running application.
  • Monitoring and management over HTTP/JMX: Actuator supports HTTP endpoints as well as Java Management Extensions (JMX) to provide a standard mechanism to monitor and manage applications.
  • Logger: It provides a way to view and update log levels.
  • Metrics: Spring Boot Actuator provides dependency management and auto-configuration for Micrometer, an application metrics facade that supports numerous monitoring systems.
  • Auditing: Once Spring Security is in play, Spring Boot Actuator has a flexible audit framework that publishes events (by default, “authentication success”, “failure” and “access denied” exceptions). This feature can be very useful for reporting and for implementing a lock-out policy based on authentication failures.
  • HTTP tracing: HTTP tracing can be enabled by providing a bean of type HttpTraceRepository in your application’s configuration. For convenience, Spring Boot offers an InMemoryHttpTraceRepository that stores traces for the last 100 request-response exchanges (a minimal sketch follows this list).
  • Process monitoring
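A minimal sketch of enabling HTTP tracing in a Spring Boot 2.x application, assuming the actuator starter is on the classpath:

import org.springframework.boot.actuate.trace.http.HttpTraceRepository;
import org.springframework.boot.actuate.trace.http.InMemoryHttpTraceRepository;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HttpTraceConfig {

    // Keeps the last 100 request-response exchanges in memory; once the
    // endpoint is exposed, they are visible at /actuator/httptrace.
    @Bean
    public HttpTraceRepository httpTraceRepository() {
        return new InMemoryHttpTraceRepository();
    }
}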

Enable Actuator in a Spring Boot project

You can enable Actuator in a Spring Boot project by including the below dependency.

//Gradle
implementation 'org.springframework.boot:spring-boot-starter-actuator:2.3.1.RELEASE'
//Maven
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
    <version>2.3.1.RELEASE</version>
</dependency>

Endpoints offered by Actuator

By default, only the ‘health’ and ‘info’ endpoints are exposed over the web.

Default exposed endpoint

The other endpoints are sensitive, and it is not advisable to expose them to a production environment without security. Since we are only demonstrating, let’s expose all of them.

management.endpoints.web.exposure.include=*

Include and exclude endpoints

You can include or exclude endpoints by defining the below properties:

# wild card to include/exclude all
management.endpoints.web.exposure.include=* 
management.endpoints.web.exposure.exclude=* 

# you can include specific properties like below
management.endpoints.web.exposure.include=env,beans
management.endpoints.web.exposure.exclude=heapdump

Customise management server address

You can customize the management server port and address; this helps you limit access to a dedicated port.

management.server.port=8081
management.server.address=127.0.0.1

Expose a custom endpoint

Any method annotated with @ReadOperation, @WriteOperation, or @DeleteOperation is automatically exposed over JMX and HTTP. You can also expose technology-specific endpoints by using @JmxEndpoint or @WebEndpoint.

Here I’m sharing an example of exposing an endpoint using Spring Boot 2.x.

import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

@Component
@org.springframework.boot.actuate.endpoint.annotation.Endpoint(id = "say-hello")
public class Endpoint {

    @ReadOperation
    public String sayHello()
    {
        return "Hello World";
    }

}
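For completeness, here is a hypothetical variant of the endpoint above that also supports a write operation (the greeting field and parameter name are made up); the write operation is invoked with an HTTP POST to /actuator/say-hello:

import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.boot.actuate.endpoint.annotation.WriteOperation;
import org.springframework.stereotype.Component;

@Component
@Endpoint(id = "say-hello")
public class GreetingEndpoint {

    private String greeting = "Hello World";

    // GET /actuator/say-hello
    @ReadOperation
    public String sayHello() {
        return greeting;
    }

    // POST /actuator/say-hello with a JSON body like {"greeting": "Hi"} updates the message.
    @WriteOperation
    public void updateGreeting(String greeting) {
        this.greeting = greeting;
    }
}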

Summary: Spring Boot Actuator is one of the best libraries you can add to your application to enable production-ready features with little effort. It offers key features that can be used in day-to-day production support.


Secure properties with spring cloud config

Overview: In an earlier post I demonstrated the basics of Spring Cloud Config. In this post, we will see how we can use secure properties with Spring Cloud Config. Before continuing, I recommend you first go through the previous post.

Secure properties: Almost every application has some configuration that can’t be exposed; some of it is very sensitive and access to it should be limited. We generally call these secure properties.

There are multiple ways to secure your properties, such as Cerberus, HashiCorp Vault with a Consul backend, CyberArk password vault and AIM, Confidant, Credstash, etc. Here we are going to use the simplest way, which may not be as powerful as the above tools but is secure and very easy to use.

Let’s implement this in a sample project.

First, generate a keystore (assuming you are familiar with keytool).

keytool -genkeypair -alias mytestkey -keyalg RSA \
  -dname "CN=Web Server,OU=Unit,O=Test,L=City,S=State,C=IN" \
  -keypass changeme -keystore server.jks -storepass testPassword

Then place server.jks in the resources folder of the cloud config server project, edit bootstrap.properties (or .yml), and add the below properties.

Property                      Description
encrypt.keyStore.location     Contains the resource (.jks file) location
encrypt.keyStore.password     Holds the password that unlocks the keystore
encrypt.keyStore.alias        Identifies which key in the store to use
encrypt.keyStore.secret       The secret used to encrypt or decrypt

e.g

encrypt.keyStore.location=server.jks
encrypt.keyStore.password=testPassword
encrypt.keyStore.alias=mytestkey
encrypt.keyStore.secret=changeme

That’s it; now restart the config server.

The config server exposes encryption and decryption endpoints.

Let’s take a simple example, where we will try to encrypt and decrypt ‘testKey’.

#curl localhost:8888/encrypt --data-urlencode testKey

output:

AQAYqm8ax79kPFGT0sOvV8i8uN0GDLsToULmflVNYKf95bpyAKLIV4eCFVdNJpgb7SyS808a3uTjvQBj1SrIwFlQktRpln8ykpWUG3NdPM6aPf5k4yRhNkG43S5lCckmyLTH8CIzoSSFQeKoFuk4zPiAPTMchTP9qtAYG2EwbdWU1/a9xqoDJb9OQbSsEr0wp2Ud+HlG02NGF2qmhxL7kW5BJxTsGdZG2J8qwhkPYreYF6UQlehmheWCAJBzfBw4peT9LOxi7rA0sHD78xle7Bahziyc+WOETADloKfSowERNY5FCOe4/ywhcHpJuCk+6NPok3KVI+jMTXdSpqMmfxBNc764hHjlhpablwNcRPDv8XGCdstdy4Esb9/eXTZgh0g=

#curl localhost:8888/decrypt --data-urlencode AQAYqm8ax79kPFGT0sOvV8i8uN0GDLsToULmflVNYKf95bpyAKLIV4eCFVdNJpgb7SyS808a3uTjvQBj1SrIwFlQktRpln8ykpWUG3NdPM6aPf5k4yRhNkG43S5lCckmyLTH8CIzoSSFQeKoFuk4zPiAPTMchTP9qtAYG2EwbdWU1/a9xqoDJb9OQbSsEr0wp2Ud+HlG02NGF2qmhxL7kW5BJxTsGdZG2J8qwhkPYreYF6UQlehmheWCAJBzfBw4peT9LOxi7rA0sHD78xle7Bahziyc+WOETADloKfSowERNY5FCOe4/ywhcHpJuCk+6NPok3KVI+jMTXdSpqMmfxBNc764hHjlhpablwNcRPDv8XGCdstdy4Esb9/eXTZgh0g=

output:

testKey

As our keystore file contains both the public and the private key, we are able to both encrypt and decrypt properties.

On the config client side, there is no extra step except one: whenever we use an encrypted property, its value has to start with ‘{cipher}’, for example

user.password={cipher}5lCckmyLTH8CIzoSSFQeKoFuk4zPiAPTMchTP9qtA

Caution: encrypted data should not be within single or double quotes.
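On the client, the property is then consumed like any other; a minimal sketch, assuming a user.password entry as above:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class DataSourceSettings {

    // The client sees the already-decrypted value (decryption happens on the config server by default).
    @Value("${user.password}")
    private String password;
}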

In case a client wants to decrypt the configuration locally:

First, stop the config server from decrypting values itself: comment out all the properties that start with encrypt.* on the config server and include the new line below, so that property values are served still encrypted.

spring.cloud.config.server.encrypt.enabled=false

Then include the keystore (.jks) file in the client project and add the below properties to its bootstrap.properties (or .yml) file:

encrypt.keyStore.location=server.jks
encrypt.keyStore.password=testPassword
encrypt.keyStore.alias=mytestkey
encrypt.keyStore.secret=changeme

That’s all; now the client project does not need to contact the server to decrypt properties, it decrypts them locally.

Summary: We can secure our external properties using Spring Cloud Config with little effort, which can easily fulfill the requirements of a small or mid-scale project.

Hope you like this 🙂


Spring cloud config

Overview: In this tutorial, we will cover the basics of Spring cloud config server and client, where you will set up your cloud config server and access configuration through client services.

What is Spring cloud config?

Spring Cloud Config provides server-side and client-side support for externalized configuration in a distributed system.

Why Spring cloud config?

External configuration is a basic need of almost any application. We are now living in the microservices world, which needs external configuration even more often. Almost every application has some external, dynamic properties that keep changing based on the environment or over time. With a massive number of services, this becomes hard to manage, or you are left building a custom solution for it.

The dilemma is how to make our application adopt dynamic changes frequently, without investing much time and with zero downtime. This is where Spring Cloud Config comes into the picture. It helps you manage external properties from multiple sources such as Git or the local file system, transfer data in encrypted form, and a lot more.

How does it work?

(Diagram: multiple client services fetch their configuration from the config server, which syncs with a Git repository.)

In the above diagram, you can see multiple services that get their configuration from the config server, while the config server syncs with Git or another VCS to pick up new changes.

Let’s create a simple application that takes a message from the config server and returns it through an endpoint.

Implementation of spring cloud config server:

Now I’m going to visit the Spring Initializr page to generate a demo project, where I’ve included the ‘web’ and ‘config server’ dependencies.

Once the project is imported into the IDE, enable the config server with the ‘@EnableConfigServer’ annotation.

@SpringBootApplication
@EnableConfigServer
public class DemoApplication {

   public static void main(String[] args) {
      SpringApplication.run(DemoApplication.class, args);
   }
}

In this example, I’m using a local Git repository for demo purposes.

#mkdir cloud-config
#git init
#touch config-client-demo.properties
#vi config-client-demo.properties 
message=Hello World
//:wq (write and quit the vi editor)
#git add config-client-demo.properties
#git commit -m "initial config"

By default, the config server starts on port 8080, like any Spring Boot application.

Here is config server application.properties

server.port=8888
spring.cloud.config.server.git.uri=${HOME}/Desktop/cloud-config

server.port: We use this property to define the server port; if you do not define it, 8080 is used as the default. Since I’m using the same machine for the client and the server, one of the ports needs to be changed.

spring.cloud.config.server.git.uri: We define this property to fetch configuration details from the Git repository.

Implementation of client application:

Now I’m again going to generate a template application using Spring Initializr, with ‘web’, ‘config client’ and ‘actuator’ as dependencies. Here ‘actuator’ helps us refresh the configuration.

Below is a message controller that returns a message coming from the config properties; notice the @RefreshScope annotation, which allows the configuration to be refreshed.

@RestController
@RefreshScope
public class MessageRestController {

    @Value("${message}")
    private String message;

    @GetMapping("/message")
    String getMessage() {
        return this.message;
    }
}

Then rename your ‘application.properties‘ file to ‘bootstrap.properties‘ and include the config below. I’ve defined the application name (it must match exactly to fetch the right configuration), the config server URI (the config server URL that holds the property details), and exposed all actuator endpoints, which are disabled by default.

spring.application.name=config-client-demo
spring.cloud.config.uri=http://localhost:8888

management.endpoints.web.exposure.include=*

spring.application.name: The exact name of the configuration file defined on the config server.

spring.cloud.config.uri: Base URL of the config server.

management.endpoints.web.exposure.include: By default most of the Actuator endpoints are not exposed, so we use a wildcard to expose them all. This is not something you would actually need in production.

Run your application and you will see the message below.

GET http://localhost:8080/message

Response :
Hello world

Let’s update the properties file; I’m going to change ‘Hello World’ to ‘Hello Test’.

#vi config-client-demo.properties
message= Hello Test
...
#git add config-client-demo.properties
#git commit -m "update message"
//Once the file is updated, 'refresh' the configuration

With the actuator endpoints exposed and @RefreshScope in place, a refresh endpoint is available to reload the configuration; check below.

POST http://localhost:8080/actuator/refresh

Now again hit your message API

GET http://localhost:8080/message
Response:
Hello Test

Check out the example in the Git repository.

Conclusion: We can use Spring Cloud Config with any kind of application as an external, distributed, and centralized configuration server. There is no limitation on technology or programming language; you can use Spring Cloud Config with other languages as well.

Hope you like this tutorial.

 

Featured

Getting started with MongoDB


Hello Everyone,

In this post, I am going to present some of the basic syntax and examples of MongoDB to get you started. For basic details on NoSQL databases, visit this link.

What is MongoDB?

MongoDB is an open-source, cross-platform NoSQL database. It is a document-oriented DB written in C++.

MongoDB stores its data on the filesystem, in BSON (Binary JSON) format. BSON documents map very naturally onto objects in object-oriented programming. In MongoDB we can store complete information in one document rather than creating different tables and then defining relationships between them.

Let’s take a brief look at the terms MongoDB uses to store data:

  • Collections: You need to create collections in each Database. Each DB can contain multiple collections.
  • Documents: Each Collection contains multiple Documents.
  • Fields: Each Document contains multiple Fields.

 

Now let’s get started with the commands:

Database:

  • Create db or use db: There is no separate command to create a db in Mongo. Whenever we want to create (or switch to) a db, we use the following command.
    • Syntax: use <dbname>
    • Example: use customerdb
  • Show current db: This is a very simple and short command.
    • Syntax: db
  • Show dbs: This command lists the existing dbs, but a db appears only if it contains at least one collection.
    • Syntax: show dbs

At this point, show dbs will return only the default dbs; our new db appears only after we add a collection to it. We will see how to add a collection in the collections section, but for now, here is an example:

    • Example: db.customer.insert({first_name:"Robin"})
    • show dbs

Now we can see our db customerdb in the list.

  • Drop db: To drop a database, use the following command. Before deleting a database, first select it.
    • Syntax: db.dropDatabase()
    • Example: use customerdb
      db.dropDatabase()

Collections

  • Create collection: Normally in MongoDB we do not need to create a collection explicitly; when we insert a document, the collection is created if it does not exist. But there is a way to create a collection explicitly and define it as expected:
    • Syntax: db.createCollection(<collectionName>, option)
      db.createCollection(<name>, { capped: <boolean>,
      autoIndexId: <boolean>,
      size: <number>,
      max: <number>,
      storageEngine: <document>,
      validator: <document>,
      validationLevel: <string>,
      validationAction: <string>,
      indexOptionDefaults: <document>,
      viewOn: <string>,
      pipeline: <pipeline>,
      collation: <document> } )
    • Example: db.createCollection("customer")

The collection name is of type String and the options parameter is a Document.
Some of the important fields are described below:

Field (optional)  Type     Description
capped            boolean  If set to true, it creates a capped collection. For a capped collection we also need to define the size.
size              number   Defines the size (in bytes) of the capped collection. Once the limit is reached, MongoDB starts deleting the oldest entries on each insert.
max               number   Defines the maximum number of documents in a capped collection. The size limit takes precedence, so ensure size is large enough to hold max documents.
autoIndexId       boolean  Automatically creates an index on _id.

 

  • Drop collection: We can drop a collection using the following command, but before dropping a collection we should be in its db.
    • Syntax: db.<collectionName>.drop()
    • Example: use customerdb
      db.customer.drop()

CRUD Operations:

MongoDB provides great flexibility for CRUD operations. We can insert or update documents on the fly.

  • Insert Document:
    • Syntax: db.<collectionName>.insert(<documents>)
    • Example: db.customer.insert([{first_name:"Robin", last_name:"Desosa"}, {first_name:"Kanika", last_name:"Bhatnagar"}, {first_name:"Rakesh", last_name:"Sharma", gender:"male"}]);

In the above example we are adding 3 documents; the first 2 have the same fields, but the third has an additional gender field. MongoDB allows inserting such non-uniform data.
When you insert a document, MongoDB automatically creates a unique _id for each document.

  • Update Document:
    • Syntax: db.<collectionName>.update({<documentIdentifier>}, {$set:{<update value>}})
    • Example: db.customer.update({first_name:"Robin"}, {$set:{gender:"male"}});
      db.customer.update({first_name:"Kanika"}, {$set:{gender:"female"}});
      db.customer.update({first_name:"Rakesh"}, {$set:{age:25}});
      The above examples add a new field to the corresponding documents.
  • Update or insert: The upsert option updates the document if it already exists or inserts a new one.
    • Syntax: db.<collectionName>.update({<docIdentifier>}, {<document>}, {upsert:true})
    • Example: db.customer.update({first_name:"Amita"}, {first_name:"Amita", last_name:"Jain", gender:"female"}, {upsert: true});
  • Rename a field in a document: We can rename a field of a specific document by using $rename in the update command.
    • Syntax: db.<collectionName>.update({<documentIdentifier>}, {$rename:{<update value>}})
    • Example: db.customer.update({first_name:"Rakesh"}, {$rename:{"gender":"sex"}});
      After this we have renamed the gender field to sex, only for the document whose first_name is "Rakesh".
  • Remove a field: To remove a field, $unset needs to be used in the update command.
    • Syntax: db.<collectionName>.update({<documentIdentifier>}, {$unset:{<field>:1}})
    • Example: db.customer.update({first_name:"Rakesh"}, {$unset:{age:1}});
  • Remove Document:
    • Syntax: db.<collectionName>.remove({<documentIdentifier >})
    • Example: db.customer.remove({first_name:"Amita"});
      (If we have multiple entries with first_name Amita and want to remove only one:)
      db.customer.remove({first_name:"Amita"}, {justOne:true});
  • Find documents: We can find documents in a collection using the following command. The output is the matching documents in JSON form.
    • Syntax: db.<collectionName>.find()
    • Example: db.customer.find();

The output of the above command is all the JSON objects stored in that collection.
To see it in a formatted way, with each object and field on a new line, we can use pretty() on find.

Example: db.customer.find().pretty();

  • Find Specific: By passing the documentIdentifier value in find method.
    • Syntax: db.<collectionName>.find({<documentIdentifier >})
    • Example: db.customer.find({first_name:"Kanika"});
  • Or condition:
    • Example: db.customer.find({$or:[{first_name:"Kanika"}, {first_name:"Robin"}]});

In the above example we pass a document to find as a parameter, and in that document we give an array of first_name conditions. $or defines the operation that is performed on that array.

  • Greater than, less than: Let’s jump directly to examples of greater than and less than.

Example:

  • db.customer.find({age:{$gt:26}});
    In the above example $gt defines that the > operation needs to be performed on age. It will find and print all the documents that have an age field with age > 26.
  • db.customer.find({age:{$lt:26}});
    In the same way, $lt helps us find all documents that have an age field with age < 26.
  • db.customer.find({age:{$gte:26}});
    We can perform >= or <= operations as well by using $gte and $lte.

Following are some more examples of features provided by MongoDB:

  • Sort:
    • db.customer.find().sort({first_name:1}); //ascending order
      db.customer.find().sort({first_name:-1}); //descending order
  • Count:
    • db.customer.find().count();
  • ForEach:
    • db.customer.find().forEach(function(doc){print("Customer Name: " + doc.first_name)});

These are the basic commands for getting started with MongoDB.

 

 

 

 


Relational to NoSQL database

Hey Everyone,

Today I’m going to share another experience I’ve struggled with. I expect that after reading this post many of you will relate your own issues to this experience and find a new way to deal with them.

There is real pain in using a relational database for a small project where you don’t need transaction support and haven’t defined any complex relationships between entities. We are so used to relational databases that we always try to go with them. But here I’m trying to stop you: first check NoSQL, and only if it doesn’t meet your requirements, go with a relational one.

Let me explain!!

What is NoSQL?

NoSQL is basically a class of databases used to manage huge sets of unstructured data, where data is not stored in tabular relations like in a relational database.

Why NoSQL?

A traditional database prefers predictable, structured data. As technology grows and we want to automate everything, we want to gather all types of data to become more predictive; getting structured data every time is difficult, and we often need to store unstructured data. Here NoSQL comes as a savior.

NoSQL, as the name says, means “Not Only SQL” and comes with a lot of added benefits. It provides a completely different way of dealing with data than a relational DB, and managing any type of data is much simpler in NoSQL than in a relational database.

Types of NoSQL database

  • Document stores: In this database, a key is paired with a complex data structure called a document. Example: MongoDB
  • Graph stores: This type of database is usually used to store networked data, where records are related to one another. Example: Neo4J
  • Key-value stores: These are the simplest NoSQL databases; each record is stored with a unique key that identifies it. Example: Aerospike
  • Wide column stores: This type of database stores large data sets. Example: Cassandra

Let’s see one of the use cases (using MongoDB and MySQL):

Imagine that we have a beer store database that holds basic beer store information, like beer style, beer type, and beer store addresses.

Let’s see a basic schema design in case of the relational database.

Here I have created a beer_store table that holds store names, then an address table that holds beer store addresses (one store can be available in multiple places), along with country and city tables. Then I’ve defined beer_style and beer_type tables for what the beer stores provide, and defined many-to-many relationships between beer_type and beer_store, and likewise between beer_style and beer_store.

(Diagram: relational schema for the beer store, with beer_store, address, city, country, beer_style and beer_type tables and their join tables.)

Now the same beer store can be modeled in NoSQL (JSON-like documents); see the schema below.

There are two ways you can design your NoSQL schema:

  1. You can store all the data in the same collection, but there is a disadvantage when some data repeats multiple times.
  2. You can store references to data, a little bit like SQL.

Here, to avoid confusion and repetition, I’ve separated out a few collections whose information is fixed and would otherwise repeat for every beer store record.

beer_store

(Screenshot: a sample beer_store document.)

and other tables that hold values and references

beer_style

(Screenshot: beer_style collection documents.)

beer_type

(Screenshot: beer_type collection documents.)

country

(Screenshot: country collection documents.)

Query example: SQL vs NoSQL

#SQL

select * from beer_store;
//output
+------------+-------------+
| pkstore_id | name        |
+------------+-------------+
|          1 | Magic Beer  |
|          2 | Beerland    |
|          3 | Uptown Beer |
+------------+-------------+

#NoSQL

db.beer_store.find()
//output
{
	"_id" : ObjectId("5ad31853e5ce6fe732eb7300"),
	"name" : "The Beera",
	"address" : [
		{
			"address1" : "sec 4",
			"address2" : "shastri cercle",
			"city" : "jaipur",
			"country" : "India"
		}
	],
	"beerType" : [
		"Ales",
		"Lagers"
	],
	"beerStyle" : [
		"Amber",
		"Blonde"
	]
}
{
	"_id" : ObjectId("5ad4bfa2b20cf32e1fa3cb6d"),
	"name" : "BeerLand",
	"address" : [
		{
			"address1" : "sec 4",
			"address2" : "hiran magri",
			"city" : "udaipur",
			"country" : "India"
		}
	],
	"beerType" : [
		"Ales",
		"Lagers"
	],
	"beerStyle" : [
		"Amber",
		"Blonde"
	]
}

Above, both the SQL and the NoSQL query request beer_store data, but since the data is persisted in different ways, the results are returned accordingly.

The same NoSQL-style response can be achieved with a relational database, but for that we would need multiple joins, subqueries, and a built-in JSON converter.

Conclusion: Relational databases have their own importance, such as supporting ACID properties and being well organized. But in many cases NoSQL is preferable, such as searching, sorting, and filtering over one-to-many and many-to-many relationships.

Hope you like this. Thanks for reading 🙂

Service discovery with Eureka

Eureka is a REST-based service that is primarily used for service registration and mid-tier API load balancing.

Why is Eureka required?

There are many reasons to consider Eureka:
– Service registry
– Client-side load balancing
– Peer-to-peer connectivity between servers
– A self-preservation mode when network failures cross a certain threshold
– Scope for customisation
– Mid-tier load balancing

How does Eureka work?

Eureka comes with two components: the Eureka client and the Eureka server. Any application doing service discovery against the Eureka server should have the Eureka client enabled. In a complete setup, three applications come into the picture:

  • Eureka server – holds the client registrations and supports mid-tier load balancing.
  • Application client – a Eureka client that calls other services.
  • Application service – also a Eureka client, but one that is called by other services.

Let’s understand this with an example. Suppose we have a web application with two services: a web client that holds the front-end implementation, and a backend service that holds the business logic. Both the backend and the front end are clients of the Eureka server, and both add the Eureka client component so they can send their heartbeats to the Eureka server. In this scenario the Eureka server maintains a registry of both applications (web client and backend service). The web client also does not need to hard-code the backend addresses: it looks the backend instances up via the Eureka registry, and calls are distributed across the available instances in a round-robin fashion by default.

Eureka server and client communication

Register – Eureka client registers the information about the running instance to the Eureka server.
Renew – The Eureka client needs to renew the lease by sending heartbeats every 30 seconds. The renewal informs the Eureka server that the instance is still alive. If the server hasn’t seen a renewal for 90 seconds, it removes the instance from its registry. It is advisable not to change the renewal interval, since the server uses that information to determine if there is a widespread problem with client-to-server communication.
Fetch registry – Eureka clients fetch the registry information from the server and cache it locally. After that, the clients use that information to find other services.
Cancel – Eureka client sends a cancel request to Eureka server on shutdown. This removes the instance from the server’s instance registry thereby effectively taking the instance out of traffic.
Time lag– All operations from Eureka client may take some time to reflect in the Eureka servers and subsequently in other Eureka clients. This is because of the caching of the payload on the eureka server which is refreshed periodically to reflect new information. Eureka clients also fetch deltas periodically. Hence, it may take up to 2 mins for changes to propagate to all Eureka clients.

Let’s go through an example. Here I’m going to create these applications:
1. Eureka server – registers all the services.
2. Application service – the backend application called by the client, itself also registered with Eureka as a client.
3. Application client – the client application that calls the application service, which it discovers via the Eureka server.

Eureka Server

I would suggest visiting Spring Initializr and generating a Spring application from there; don’t forget to include the Eureka server dependency. Please visit this page for more details.

Once you have imported the project into your IDE, go to the resources folder, open the application.properties/yml file, and define the bare-minimum properties below to bring your server up and make it visible.

//YAML format
server:
    port: 8761

eureka:
    instance:
        hostname: localhost
    client:
        fetch-registry: false
        register-with-eureka: false
        serviceUrl:
            defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
spring:
    freemarker:
        prefer-file-system-access: false
        template-loader-path: classpath:/templates/
-------------------------------------------------------------------------
OR
// properties format

eureka.instance.hostname=localhost
eureka.client.fetch-registry=false
eureka.client.register-with-eureka=false
eureka.client.serviceUrl.defaultZone=http://${eureka.instance.hostname}:${server.port}/eureka/
server.port=8761
spring.freemarker.prefer-file-system-access=false
spring.freemarker.template-loader-path=classpath:/templates/

server.port – the server port; we need to define a unique port number here.
eureka.client.fetch-registry – set to false so the server does not try to fetch the registry as a client.
eureka.client.register-with-eureka – set to false so the server does not register itself.
eureka.client.serviceUrl.defaultZone – defines the default zone address; clients connect to this address.
spring.freemarker.template-loader-path – the dashboard UI pages are bundled with the Eureka server, so this points the template path to the classpath in case it is not detected by default.
spring.freemarker.prefer-file-system-access – set to false as there is no need to read the local file system.

Now open the application class and enable the Eureka server using the @EnableEurekaServer annotation, then start the server.

@EnableEurekaServer
@SpringBootApplication
public class EurekaserverApplication {

	public static void main(String[] args) {
		SpringApplication.run(EurekaserverApplication.class, args);
	}
}

Once the server is up, visit http://localhost:8761 and you will see a dashboard similar to the one below.

Eureka dashboard

There is no application registered with the Eureka server yet. Let’s create a Eureka client.

Eureka Client

As with the Eureka server, we generate the project from Spring Initializr. Once the project is generated and opened in the IDE, we edit the application.properties/yml file.

spring:
  application:
    name: eureka-service-client

server:
  port: 8082

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka
---------------------------------------------------------------------
OR
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka
server.port=8082
spring.application.name=eureka-service-client

eureka.client.serviceUrl.defaultZone – the default zone where the Eureka client registers
server.port – server port
spring.application.name – the application name; the same name will be visible on the Eureka server

Open the application class and add @EnableDiscoveryClient.

@SpringBootApplication
@EnableDiscoveryClient
public class EurekaclientApplication {

	public static void main(String[] args) {
		SpringApplication.run(EurekaclientApplication.class, args);
	}

}

Now start the client application and check the Eureka dashboard to see whether it is visible.
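As a rough sketch of how a registered service can then be looked up from the registry, the controller below lists the instances Eureka knows for the application name we registered above (the endpoint path is made up for illustration):

import java.util.List;

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class InstanceLookupController {

    private final DiscoveryClient discoveryClient;

    public InstanceLookupController(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    // Returns the instances currently registered under 'eureka-service-client'.
    @GetMapping("/instances")
    public List<ServiceInstance> instances() {
        return discoveryClient.getInstances("eureka-service-client");
    }
}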

Kafka: Kafka producer with SpringBoot

In my earlier article we saw how to produce and consume messages using the terminal.

In this post I’ll show you how we can produce events/messages using a Spring Boot project.

Spring also provides support for Kafka. Spring Kafka brings the simple and typical Spring template programming model with a KafkaTemplate, and message-driven POJOs via the @KafkaListener annotation.

Now without any further delay let’s start implementing.

Step 1: Start the Zookeeper and Kafka server on your local.

Step 2: Create a spring boot project with Kafka dependencies.

Create a Spring Boot project and add the below dependencies to your build.gradle / pom.xml.

implementation group: 'org.springframework.kafka', name:'spring-kafka'
implementation group: 'org.apache.kafka', name: 'kafka-clients', version: '2.6.0'

Step 3: Application configuration

We will define the bootstrap server and the topic name in application.properties.

server.port=7000
kafka.bootstrap.server=localhost:9092
kafka.topic.name=greetings

Step 4: Configuring the topic

You can create a topic using the command prompt or using spring boot configuration as below:

@Configuration
public class TopicConfig {

    @Value(value = "${kafka.bootstrap.server}")
    private String bootstrapAddress;

    @Value(value = "${kafka.topic.name}")
    public String topic;

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic topic1() {
        // Create the topic with 1 partition and a replication factor of 1.
        return new NewTopic(topic, 1, (short) 1);
    }
}

Step 5: Producer configuration

In the producer configuration we need a ProducerFactory bean and a KafkaTemplate bean.

@Configuration
public class KafkaProducerConfig {

    @Value(value = "${kafka.bootstrap.server}")
    private String bootstrapAddress;


    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                bootstrapAddress);
        configProps.put(
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class);
        configProps.put(
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}

Step 6: Publishing messages

Let’s create a REST controller which takes a message as input and publishes it to the Kafka topic.

@RestController
@RequestMapping("/greetings")
public class MessageController {


    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Value(value = "${kafka.topic.name}")
    public String topic;

    @GetMapping("/msg")
    public void sendMessage(@RequestParam String msg) {
        kafkaTemplate.send(topic, msg);
    }
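With the application running on the configured port (7000 above), a message can then be published by hitting the endpoint, for example:

GET http://localhost:7000/greetings/msg?msg=hello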

Summary:

In this post I have shown you how to create a topic and publish messages to it from a Spring Boot application.

Kafka: Publish and Consume messages

In my earlier posts, I explained what Kafka is and how to install and run Kafka on your system.

Now we will see how to publish and consume messages in Kafka.

Step 1: Create a Topic

As we know, in Kafka a publisher publishes messages to a topic, and Kafka decides which partition within the topic each message is assigned to.

So first we will create a topic named greetings. For this, let’s open a new command prompt and navigate to the bin/windows folder. Then we can create a topic using kafka-topics.bat.

Notice the bootstrap server port 9092; this is the default port of the Kafka server.

$ kafka-topics.bat --create --topic greetings --bootstrap-server localhost:9092

So now we have successfully created the topic.

We can pass the --describe parameter to kafka-topics.bat to get information about the topic.

$ kafka-topics.bat --describe --topic greetings --bootstrap-server localhost:9092

Step 2: Publish some events

Now let’s write some messages, i.e. publish some events to the topic.

To do so, open a new command prompt, navigate to bin/windows, and type the below command.

$ kafka-console-producer.bat --topic greetings --bootstrap-server localhost:9092

Then type the messages you want to publish. By default, each line you enter is sent as a separate event to the topic.

We can stop the publisher any time by pressing Ctrl + C.

Step 3: Consume events

Open another terminal; using kafka-console-consumer.bat you will be able to consume the messages.

$ kafka-console-consumer.bat --topic greetings --from-beginning --bootstrap-server localhost:9092

Great 👏

Now you are publishing and consuming messages using Kafka.

Summary:

In this article we have demonstrated how to create a topic in Kafka and how to produce and consume messages using Kafka’s console producer and consumer tools.

Prev -> Kafka: Install and Run Apache Kafka on windows

Kafka: Install and Run Apache Kafka on windows

Install Apache Kafka on Windows

STEP 1: Install Java JDK 8 or above

Kafka needs the Java JDK (version 8 or above) installed on our system.

STEP 2: Download and Install Apache Kafka binaries

You can download the Apache Kafka binaries from the official Apache Kafka page:

https://kafka.apache.org/downloads

STEP 3: Extract the binary

Extract the binaries to a folder and create a ‘data’ folder at the same level as bin.

Inside the data folder, create zookeeper and kafka folders.

STEP 4: Update configuration value

Update the ZooKeeper data directory path in the “config/zookeeper.properties” configuration file with the zookeeper folder path that you created inside data.

Update the Apache Kafka log directory path in the “config/server.properties” configuration file with the kafka folder path.

STEP 5:  Start Zookeeper

Now we will start ZooKeeper from the command prompt. Go to the Kafka bin\windows folder and execute the zookeeper-server-start.bat command with the config/zookeeper.properties configuration file.

Here we are using the default properties that are already bundled with the Kafka binary in the config folder; later we can update them according to our needs.

To validate that ZooKeeper started successfully, check for the logs below.

STEP 6:  Start Apache Kafka

Finally we will start Apache Kafka from the command prompt in the same way we started ZooKeeper. Open another command prompt and run the kafka-server-start.bat command with the Kafka config/server.properties configuration file.

Summary:

To proceed with Kafka you need to install and run the Kafka and ZooKeeper servers on your machine, using the above steps.

Next-> Kafka: Publish and Consume messages

Prev-> Kafka: Introduction to Kafka

Kafka: Introduction to Kafka

In this world of data, where things and systems depend on data, it is very important to get the right data at the right time to make the most of it. For this, a great data streaming architecture, Apache Kafka, was introduced in 2011.

Here I am bringing a short course on Kafka, where I try to provide a basic understanding of Kafka, its core architecture, and some hands-on producer/consumer code.

So let’s get started 😊

What is Kafka?

Apache Kafka originated at LinkedIn and became an open-sourced Apache project in 2011, then a first-class Apache project in 2012. Kafka is written in Scala and Java.

Apache Kafka is a publish-subscribe based, fault-tolerant messaging system. It is fast, scalable, and distributed by design.

“Kafka is an Event Streaming architecture.”

Event streaming is capturing data in real-time from various event sources like databases, cloud services, software applications, etc.

Why Kafka?

Kafka is a messaging system that typically suits applications requiring high throughput and low latency. It can be used for real-time analytics.

Kafka can work with Flume/Flafka, Spark Streaming, Storm, HBase, Flink, and Spark for real-time ingestion, analysis, and processing of streaming data. Kafka data streams are often used to feed Hadoop big data lakes. Kafka brokers support massive message streams for low-latency follow-up analysis in Hadoop or Spark.

Basics of Kafka:

Apache.org states that:

  • Kafka runs as a cluster on one or more servers.
  • The Kafka cluster stores a stream of records in categories called topics.
  • Each record consists of a key, a value, and a timestamp.

Key concepts:

Events and Offsets:

Kafka uses a log data structure to store events/messages. Each message in a partition is identified by a unique, sequential offset, and Kafka appends messages to the log strictly in order within a partition.

Offsets are pointers that tell a consumer from where data needs to be picked up.

Events/messages can stay in a partition for a very long period, even forever, depending on the retention policy.

Topics and Partitions:

A topic is a uniquely named category to which producers publish messages.

Each topic contains one or more partitions, and the partitions contain the messages.

Messages are written to topics, and by default Kafka uses round robin to select which partition each message is written to.

To make sure that a particular type of message always goes to the same partition, we can assign a key to the messages; attaching a key ensures that messages with the same key always go to the same partition of a topic. Kafka guarantees order within a partition, but not across partitions in a topic.
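As a small sketch using the Spring Kafka template from the producer post, a key can be supplied when sending so that related messages land in the same partition (the key shown here is illustrative):

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class GreetingProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public GreetingProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Messages that share the same key (for example a customer id)
    // are always routed to the same partition of the topic.
    public void send(String customerId, String message) {
        kafkaTemplate.send("greetings", customerId, message);
    }
}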

Cluster and Broker:

A Kafka cluster can have multiple brokers inside it to maintain load balancing. A single Kafka server is called a Kafka broker. The Kafka cluster itself is stateless, hence Kafka uses ZooKeeper to maintain the cluster state.

I’ll cover ZooKeeper in the next point. For now let’s understand what a broker is.

A broker receives messages from producers, assigns offsets to them, and then stores them on local disk.

A broker is also responsible for serving the fetch requests coming from consumers.

Each broker hosts partitions of one or more topics. A topic and its partitions can be replicated across multiple brokers, but each partition has exactly one owner, or leader.

For example, in the below diagram Partition 0 of topic X is replicated on Broker 1 and Broker 2, but there is always only one leader. The replica is used as a backup of the partition, so that if a particular broker fails, a replica takes over leadership.

Producers and consumers connect only to the leader of a partition.

Zookeeper:

Kafka uses ZooKeeper to maintain and coordinate the brokers.

ZooKeeper also sends notifications to producers and consumers about the presence of any new broker or any newly elected leader, so that they can make decisions and start coordinating their work accordingly.

Consumer Group:

A consumer group is a group of one or more consumers. Each consumer group has a unique id.

Only one consumer in a group can pull messages from a particular partition; the same consumer group cannot have multiple consumers reading the same partition.

Multiple consumers can consume messages from the same partition, but they must be from different consumer groups.

If there are more consumers in a group than there are partitions, some consumers in the group will stay inactive.

Summary:

Kafka is an event-based messaging system, mostly suited for applications where a large amount of real-time data needs to be processed.

In its complete architecture, Kafka provides load balancing, data backup, message ordering, the ability to read messages from a particular position, message storage for longer periods, and the ability for consumers in different groups to fetch the same messages.

Next -> Kafka: Install and Run Apache Kafka on windows