Tag: aws

Tips & Tricks – Spark Streaming and Amazon S3

As we all know, Amazon S3 is an excellent storage service for persisting both hot and cold data in this big-data era. Amazon claims 99.99% uptime for it; you can follow the Amazon documentation for more details. While we can all agree that S3 storage is easy to use and resilient…
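To make the pattern concrete, here is a minimal sketch of streaming output landing in S3, assuming Spark Structured Streaming with the s3a:// connector; the bucket and paths are hypothetical, and credential and committer tuning are omitted:

import org.apache.spark.sql.SparkSession

object StreamToS3 {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("stream-to-s3").getOrCreate()

    // A socket source as a stand-in for a real stream such as Kafka.
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Micro-batches written to S3; bucket and prefixes here are hypothetical.
    val query = lines.writeStream
      .format("parquet")
      .option("path", "s3a://my-bucket/streaming-output/")
      .option("checkpointLocation", "s3a://my-bucket/checkpoints/")
      .start()

    query.awaitTermination()
  }
}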
Read more

Solved : Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try

Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.27.10.191:50010,DS-51d68378-35de-4c70-b27e-7a98a53919cc,DISK], DatanodeInfoWithStorage[172.27.10.223:50010,DS-f172c682-713d-4a8f-b8af-69198ddc6756,DISK]], original=[DatanodeInfoWithStorage[172.27.10.191:50010,DS-51d68378-35de-4c70-b27e-7a98a53919cc,DISK], DatanodeInfoWithStorage[172.27.10.223:50010,DS-f172c682-713d-4a8f-b8af-69198ddc6756,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:925)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:988)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1156)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)…
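As the message hints, the usual workaround on small clusters (typically fewer than four datanodes) is to relax the replace-datanode-on-failure policy on the HDFS client. A sketch of setting it programmatically on a Hadoop Configuration; the same two properties can also go into hdfs-site.xml:

import org.apache.hadoop.conf.Configuration

val conf = new Configuration()
// Keep failure handling enabled, but never try to swap in a replacement datanode.
// Appropriate only for small clusters where no spare datanode exists.
conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true)
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER")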
Read more

Amazon Aurora MySQL Command Line

Connecting to a Database on a DB Instance Running the MySQL Database Engine

Once Amazon RDS provisions your DB instance, you can use any standard SQL client application to connect to a database on the DB instance. In this example, you connect to a database on a MySQL DB instance using MySQL monitor commands. One…
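The same connection works from any JDBC client as well. A minimal Scala sketch, assuming the MySQL Connector/J driver is on the classpath; the cluster endpoint, database name, and credentials are placeholders:

import java.sql.DriverManager

object AuroraConnect {
  def main(args: Array[String]): Unit = {
    // Endpoint, database, user, and password below are placeholders.
    val url = "jdbc:mysql://mycluster.cluster-abc123.us-east-1.rds.amazonaws.com:3306/mydb"
    val conn = DriverManager.getConnection(url, "admin", "mypassword")
    try {
      val rs = conn.createStatement().executeQuery("SELECT VERSION()")
      while (rs.next()) println(rs.getString(1))
    } finally {
      conn.close()
    }
  }
}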
Read more

Create and Insert to Hive table example

Create table :
hive> CREATE TABLE students (name VARCHAR(64), age INT, gpa DECIMAL(3, 2));
OK
Time taken: 1.084 seconds

List tables :
hive> show tables;
OK
students
values__tmp__table__1
Time taken: 0.023 seconds, Fetched: 2 row(s)

Describe table :
hive> describe students;
OK
name                 varchar(64)
age…
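The same DDL plus an insert can also be driven from Spark with Hive support; a sketch in Scala, assuming a SparkSession wired to the same Hive metastore (the inserted row is illustrative):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("hive-students")
  .enableHiveSupport() // requires a Hive-enabled Spark build and metastore access
  .getOrCreate()

spark.sql("CREATE TABLE IF NOT EXISTS students (name VARCHAR(64), age INT, gpa DECIMAL(3, 2))")
spark.sql("INSERT INTO students VALUES ('fred flintstone', 35, 1.28)") // illustrative row
spark.sql("SELECT * FROM students").show()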
Read more

Create and Insert to HBase table example

Login into the master node :
[ec2-user@ip-123-45-67-89 ~]$ sudo hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.3.1, rUnknown, Fri Sep 22 21:28:57 UTC 2017
hbase(main):001:0> list

Create table :
Command syntax :
create '<table_name>', '<column_family>'
Example :
hbase(main):004:0> create 'employee_hbase', 'cf1'
Insert data into above…
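The same table can be written programmatically with the HBase client API; a Scala sketch in which the table and column family match the shell example, while the row key, qualifier, and value are illustrative:

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

val conf = HBaseConfiguration.create() // picks up hbase-site.xml from the classpath
val connection = ConnectionFactory.createConnection(conf)
try {
  val table = connection.getTable(TableName.valueOf("employee_hbase"))
  // Row key, column qualifier, and value are illustrative.
  val put = new Put(Bytes.toBytes("emp-1"))
  put.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("name"), Bytes.toBytes("John"))
  table.put(put)
  table.close()
} finally {
  connection.close()
}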
Read more

Where is emrfs-site.xml?

The emrfs-site.xml file is created if EMRFS is enabled when creating the EMR cluster in AWS. You can manage other related configurations by logging into the master node and looking in the following location:

[ec2-user@ip-123-45-67-89 ~]$ ls -ltr /usr/share/aws/emr/emrfs/conf/emrfs-site.xml
-rw-r--r-- 1 root root 609 Feb 6 21:59 /usr/share/aws/emr/emrfs/conf/emrfs-site.xml
[ec2-user@ip-123-45-67-89 ~]$ cat /usr/share/aws/emr/emrfs/conf/emrfs-site.xml
<?xml version="1.0"?>…
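To read a value out of that file from code, it can be loaded like any Hadoop configuration resource; a Scala sketch follows, where fs.s3.consistent is only an example key, so check your own file for the properties it actually contains:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

val conf = new Configuration(false) // start empty, skip the default resources
conf.addResource(new Path("/usr/share/aws/emr/emrfs/conf/emrfs-site.xml"))
// "fs.s3.consistent" is an example EMRFS key; your file may define others.
println(conf.get("fs.s3.consistent"))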
Read more