
Tag: linux

Working with PostgreSQL on macOS

Install / upgrade PostgreSQL:

smartechie-macos :~ $ brew install postgresql

Get installation details:

smartechie-macos :~ $ brew info postgres

Result:

postgresql: stable 10.4 (bottled), HEAD
Object-relational database system
https://www.postgresql.org/
Conflicts with:
  postgres-xc (because postgresql and postgres-xc install the same binaries.)
/usr/local/Cellar/postgresql/9.6.3 (3,260 files, 36.6MB)
  Poured from bottle on 2017-06-05 at 20:47:39
/usr/local/Cellar/postgresql/10.4 (3,389 files, 39.2MB) *…
Read more
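After installing, a minimal sketch of starting the server and connecting (the brew services subcommand and the default postgres database reflect stock Homebrew/PostgreSQL behavior and are assumptions, not taken from the post):

smartechie-macos :~ $ brew services start postgresql   # launch PostgreSQL as a Homebrew-managed background service
smartechie-macos :~ $ psql postgres                    # connect to the default database as the current user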

Solved: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try

Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.27.10.191:50010,DS-51d68378-35de-4c70-b27e-7a98a53919cc,DISK], DatanodeInfoWithStorage[172.27.10.223:50010,DS-f172c682-713d-4a8f-b8af-69198ddc6756,DISK]], original=[DatanodeInfoWithStorage[172.27.10.191:50010,DS-51d68378-35de-4c70-b27e-7a98a53919cc,DISK], DatanodeInfoWithStorage[172.27.10.223:50010,DS-f172c682-713d-4a8f-b8af-69198ddc6756,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
  at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:925)
  at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:988)
  at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1156)
  at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)…
Read more
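As the message itself points out, the client can relax the replacement policy. A hedged sketch of the client-side hdfs-site.xml (the property names come from the error message and stock HDFS configuration; NEVER is a common workaround on clusters with fewer than four datanodes, at the cost of write-pipeline resilience):

<!-- hdfs-site.xml on the client: keep writing without replacing a failed datanode -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>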

Create Superuser in HUE

[ec2-user@ip-123-45-67-890 ~]$ ls -ltr /usr/lib/hue/build/env/bin/hue
-rwxr-xr-x 1 root root 523 Sep 22 22:09 /usr/lib/hue/build/env/bin/hue
[ec2-user@ip-123-45-67-890 ~]$ sudo /usr/lib/hue/build/env/bin/hue createsuperuser
Username (leave blank to use 'root'): sudhir
Email address: mail2sudhir.online@gmail.com
Password:
Password (again):
Superuser created successfully.
[ec2-user@ip-123-45-67-890 ~]$
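If the interactive prompts are not an option, the same hue binary exposes a Django shell; a hedged sketch assuming HUE's bundled django.contrib.auth (the password here is a placeholder):

[ec2-user@ip-123-45-67-890 ~]$ sudo /usr/lib/hue/build/env/bin/hue shell
>>> from django.contrib.auth.models import User
>>> User.objects.create_superuser('sudhir', 'mail2sudhir.online@gmail.com', 'changeme')  # username, email, password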

ERROR when writing a file to an S3 bucket from an EMRFS-enabled Spark cluster

ERROR:

18/03/02 01:42:17 INFO RetryInvocationHandler: Exception while invoking ConsistencyCheckerS3FileSystem.mkdirs over null. Retrying after sleeping for 10000ms.
com.amazon.ws.emr.hadoop.fs.consistency.exception.ConsistencyException: Directory 'bucket/folder/_temporary' present in the metadata but not s3
  at com.amazon.ws.emr.hadoop.fs.consistency.ConsistencyCheckerS3FileSystem.getFileStatus(ConsistencyCheckerS3FileSystem.java:506)

Root cause: the consistency error usually comes from manual deletion of files and directories from the S3 console, which leaves the EMRFS metadata out of step with S3, together with the retry logic in Spark and Hadoop…
Read more
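Because the EMRFS metadata in DynamoDB has diverged from what is actually in S3, re-syncing the metadata usually clears the error. A hedged sketch using the emrfs CLI available on the EMR master node (s3://bucket/folder stands in for the path named in the exception):

[hadoop@ip-123-45-67-890 ~]$ emrfs diff s3://bucket/folder   # show metadata/S3 mismatches
[hadoop@ip-123-45-67-890 ~]$ emrfs sync s3://bucket/folder   # rebuild the metadata from S3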

Amazon Aurora MySQL Command Line

Connecting to a Database on a DB Instance Running the MySQL Database Engine

Once Amazon RDS provisions your DB instance, you can use any standard SQL client application to connect to a database on the DB instance. In this example, you connect to a database on a MySQL DB instance using MySQL monitor commands. One…
Read more
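For reference, a minimal sketch of such a connection with the stock mysql client (endpoint, port, and user are placeholders; the real values come from the cluster endpoint shown in the RDS console):

$ mysql -h mycluster.cluster-123456789012.us-east-1.rds.amazonaws.com -P 3306 -u admin -p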

Exception when creating a Hive table from an HDFS Parquet file

Problem:

FAILED: SemanticException Cannot find class 'parquet.hive.DeprecatedParquetInputFormat'

Solution:

[hadoop@ip-123-45-67-890 ~]$ mkdir extjars
[hadoop@ip-123-45-67-890 ~]$ cd extjars/

Now download the required jars:

[hadoop@ip-123-45-67-890 extjars]$ for f in parquet-avro parquet-cascading parquet-column parquet-common parquet-encoding parquet-generator parquet-hadoop parquet-hive parquet-pig parquet-scrooge parquet-test-hadoop2 parquet-thrift
do
  curl -O https://oss.sonatype.org/service/local/repositories/releases/content/com/twitter/${f}/1.2.4/${f}-1.2.4.jar
done
[hadoop@ip-123-45-67-890 extjars]$ curl -O https://oss.sonatype.org/service/local/repositories/releases/content/com/twitter/parquet-format/1.0.0/parquet-format-1.0.0.jar

[hadoop@ip-123-45-67-890 extjars]$ ls -ltr
total 5472
-rw-rw-r-- 1 hadoop hadoop 891821 Dec…
Read more
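Downloading the jars alone is not enough; Hive still has to load them. A hedged sketch of registering one of them for the current session (the path assumes the extjars directory created above):

hive> ADD JAR /home/hadoop/extjars/parquet-hive-1.2.4.jar;

To make the jars available to every session, the same paths can instead be listed under hive.aux.jars.path in hive-site.xml.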

Solved: Hive work directory creation issue

Exception:

smartechie:~ sudhir.pradhan$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/Cellar/hive/2.3.1/libexec/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/Cellar/hadoop/2.8.0/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/Cellar/hive/2.3.1/libexec/lib/hive-common-2.3.1.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
  at org.apache.hadoop.fs.Path.initialize(Path.java:254)
  at org.apache.hadoop.fs.Path.<init>(Path.java:212)
  at…
Read more
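The URI parser is tripping over the unresolved ${system:java.io.tmpdir} placeholder in Hive's default directory settings. A hedged sketch of the usual hive-site.xml fix, replacing the placeholders with absolute paths (the property names are stock Hive; the /tmp paths are assumptions):

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/tmp/hive</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/tmp/hive_resources</value>
</property>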

Unzip Multiple Files from Linux Command Line

Problem:

[hadoop@spradhan]$ unzip *.zip
Archive: a.csv.zip
caution: filename not matched: b.csv.zip
caution: filename not matched: c.csv.zip
caution: filename not matched: d.csv.zip
caution: filename not matched: e.csv.zip

The shell expands *.zip before unzip runs, so unzip treats the remaining archive names as patterns to extract from the first archive.

Solution: quote the pattern so unzip does the wildcard matching itself.

[hadoop@spradhan]$ unzip '*.zip'

To run it in the background:

[hadoop@spradhan]$ nohup unzip '*.zip' &
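A hedged alternative that sidesteps the quoting subtlety by letting the shell do the iteration (equivalent here, assuming all archives sit in the current directory):

[hadoop@spradhan]$ for f in *.zip; do unzip "$f"; done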