This topic pertains to the following supported Hadoop distributions that are co-located with SAS Cloud Analytic Services (CAS): Apache Hadoop, Cloudera, and Hortonworks Data Platform.
When a Hadoop distribution is upgraded, the upgrade often creates new directory paths for the new Hadoop installation. However, the Hadoop vendor's upgrade process does not migrate the SAS plug-in files to those new paths. As a result, HDFS fails to start and returns errors that reference the SAS plug-in classes named in the following properties:

dfs.namenode.plugins=com.sas.cas.hadoop.NameNodeService
dfs.datanode.plugins=com.sas.cas.hadoop.DataNodeService

To avoid this failure, follow these steps:

1. Before you upgrade your co-located Hadoop, disable or remove the SAS plug-in properties shown above from your HDFS configuration.
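A minimal pre-upgrade sketch of this check is shown below. The file path and the property layout are illustrative only (the real hdfs-site.xml location varies by distribution, and on Cloudera or Hortonworks clusters these properties are usually managed through the cluster manager rather than edited directly); a throwaway copy of hdfs-site.xml stands in for your cluster's file so the example is self-contained.

```shell
# Hypothetical illustration: HDFS_SITE would normally point at your
# cluster's hdfs-site.xml. A throwaway copy is used here instead.
HDFS_SITE="$(mktemp)"
cat > "$HDFS_SITE" <<'EOF'
<configuration>
  <property>
    <name>dfs.namenode.plugins</name>
    <value>com.sas.cas.hadoop.NameNodeService</value>
  </property>
  <property>
    <name>dfs.datanode.plugins</name>
    <value>com.sas.cas.hadoop.DataNodeService</value>
  </property>
</configuration>
EOF

# Back up the configuration before changing anything.
cp "$HDFS_SITE" "${HDFS_SITE}.pre-upgrade.bak"

# Report any SAS plug-in entries that should be disabled or removed
# before the upgrade (use your cluster manager where applicable).
if grep -qE 'dfs\.(namenode|datanode)\.plugins' "$HDFS_SITE"; then
  echo "SAS plug-in properties found: disable or remove them before upgrading."
else
  echo "No SAS plug-in properties found."
fi
```

The backup copy makes it easy to restore the properties after the upgrade completes and the SAS files have been copied into the new Hadoop paths.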
2. After you have upgraded your co-located Hadoop, redo the steps in the topic appropriate for your Hadoop distribution:
Configure the Existing Apache Hadoop Cluster to Interoperate with the CAS Server
Configure the Existing Cloudera Hadoop Cluster to Interoperate with the CAS Server
Configure the Existing Hortonworks Data Platform Hadoop Cluster to Interoperate with the CAS Server
Performing these steps ensures that the SAS executable files and SAS JAR files are placed in the correct paths of the upgraded Hadoop installation and that the configuration properties that enable them are in place.
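A post-upgrade spot check along these lines can confirm that SAS JAR files landed under the new Hadoop installation. The installation root, directory layout, and JAR name below are all assumptions for illustration; a throwaway directory stands in for the new Hadoop root so the sketch is self-contained.

```shell
# Hypothetical post-upgrade check: HADOOP_NEW would be the new Hadoop
# installation root created by the upgrade. A throwaway directory and a
# stand-in JAR name are used here; substitute your site's actual values.
HADOOP_NEW="$(mktemp -d)"
mkdir -p "$HADOOP_NEW/share/hadoop/common/lib"
touch "$HADOOP_NEW/share/hadoop/common/lib/sas.cas.hadoop.jar"  # stand-in name

# Count SAS JAR files under the new installation; zero means the copy
# steps for your distribution still need to be redone.
found=$(find "$HADOOP_NEW" -name 'sas*.jar' | wc -l)
if [ "$found" -gt 0 ]; then
  echo "Found $found SAS JAR file(s) under $HADOOP_NEW."
else
  echo "No SAS JAR files found under $HADOOP_NEW: redo the copy steps for your distribution."
fi
```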
If you did not disable or remove the properties described in step 1 and you encountered the HDFS start-up failure, you can recover by following the steps to copy the SAS files into your Hadoop distribution and then retrying your Hadoop upgrade.