7 Replies Latest reply on Apr 22, 2021 12:39 PM by user186817

    How To Run Mapping using Blaze runtime connect to Cloudera 5.x

    Niaga Prima Seasoned Veteran

      Hi Support,

       

      Currently, we have an issue when running a mapping using the Blaze runtime.

      Informatica version: 10.4.1.3
      CDH version: 5.16

      Based on information from our vendor, CDH 5.16 does not support the Timeline Server.

      What is the workaround for running a mapping using Blaze?

      Can we run a mapping with the Blaze runtime without the Blaze monitor and Grid Manager?

      Are there any simple steps to run a mapping with Blaze without additional configuration in CDH? Running the same mapping with Spark is successful.

       

      Thanks,

      Suyanto

        • 1. Re: How To Run Mapping using Blaze runtime connect to Cloudera 5.x
          user165569 Guru

          Hi Suyanto,

           

           You can run mappings in Blaze mode in DEI 10.4.1.3. In our latest versions we start our own ATS on the cluster side, so Blaze has no dependency on the CDH ATS.

           

          Are you facing any issues with Blaze mappings?

           

          Thanks,
          Ninju

          Informatica Support

          • 2. Re: How To Run Mapping using Blaze runtime connect to Cloudera 5.x
            Niaga Prima Seasoned Veteran

            Hi All,

             

            Yes, I am facing an issue when running a mapping in Blaze mode.

             

            Here is the mapping log:

             

            2021-04-20 16:40:35.995 <LdtmWorkflowTask-pool-1-thread-10> SEVERE: The Integration Service failed to run the task [MAINSESSION_task1]. See the additional error messages for more information.

            java.util.concurrent.CompletionException: [com.informatica.sdk.dtm.ExecutionException: [GRIDDTM_1011] The Integration Service failed to execute grid mapping.]

            [[CAL_API_1] The Integration Service encountered an unexpected error condition: [java.lang.RuntimeException: Connection timeout; application [Blaze_Grid_Manager_Service] running on host:port [n25.bigdata.bri.co.id:12496] is still initializing..

                at com.informatica.dtm.executor.grid.cal.yarn.svc.CADIRuntimeRegistry.checkCADIInitialization(CADIRuntimeRegistry.java:659)

                at com.informatica.dtm.executor.grid.cal.yarn.svc.CADIRuntimeRegistry.validateGridMgrService(CADIRuntimeRegistry.java:560)

                at com.informatica.dtm.executor.grid.cal.yarn.svc.CADIRuntimeRegistry.startCADIGridManager(CADIRuntimeRegistry.java:479)

                at com.informatica.dtm.executor.grid.cal.yarn.svc.CADIRuntimeRegistry.startCADIServices(CADIRuntimeRegistry.java:367)

                at com.informatica.dtm.executor.grid.cal.yarn.svc.CADIRuntimeRegistry.access$1(CADIRuntimeRegistry.java:318)

                at com.informatica.dtm.executor.grid.cal.yarn.svc.CADIRuntimeRegistry$StartCADIHandler.run(CADIRuntimeRegistry.java:829)

                at com.informatica.dtm.executor.grid.cal.yarn.svc.CADIRuntimeRegistry$StartCADIHandler.run(CADIRuntimeRegistry.java:1)

                at java.security.AccessController.doPrivileged(Native Method)

                at javax.security.auth.Subject.doAs(Subject.java:422)

                at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)

                at com.informatica.platform.dtm.executor.hadoop.impl.AbstractIUserGroupInformationImpl.doAs(AbstractIUserGroupInformationImpl.java:122)

                at com.informatica.dtm.executor.grid.cal.yarn.svc.CADIRuntimeRegistry.initCADIServices(CADIRuntimeRegistry.java:151)

                at com.informatica.dtm.executor.grid.cal.yarn.client.YarnClusterServicesCnxFactory.initCADIServices(YarnClusterServicesCnxFactory.java:78)

                at com.informatica.platform.dtm.executor.grid.GridExecutor.initCADIServices(GridExecutor.java:1000)

                at com.informatica.platform.dtm.executor.grid.GridExecutor.preProcessing(GridExecutor.java:413)

                at com.informatica.platform.dtm.executor.grid.GridExecutor.runAsync(GridExecutor.java:275)

                at com.informatica.platform.dtm.executor.grid.task.impl.GridMappingTaskHandlerImpl.executeMainScriptAsync(GridMappingTaskHandlerImpl.java:149)

                at com.informatica.executor.workflow.taskhandler.impl.BaseTaskHandlerImpl.startTaskAsync(BaseTaskHandlerImpl.java:234)

                at com.informatica.executor.workflow.taskhandler.impl.BaseTaskHandlerImpl.runAsync(BaseTaskHandlerImpl.java:200)

                at com.informatica.executor.workflow.taskhandler.impl.BaseTaskHandlerImpl.run(BaseTaskHandlerImpl.java:119)

                at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

                at java.util.concurrent.FutureTask.run(FutureTask.java:266)

                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

                at java.lang.Thread.run(Thread.java:748)

            ].]

            2021-04-20 16:40:35.995 <LdtmWorkflowTask-pool-1-thread-10> FINE: Updating workflow state to failed.

            2021-04-20 16:40:35.995 <LdtmWorkflowTask-pool-1-thread-10> FINE:  Workflow Completed with status: FAILED

             

             

            2021-04-20 16:40:36.055 <OnDemandDTM-pool-2-thread-51> SEVERE: The Integration Service failed to execute the mapping.

            java.lang.RuntimeException: java.util.concurrent.CompletionException: com.informatica.sdk.dtm.ExecutionException: [GRIDDTM_1011] The Integration Service failed to execute grid mapping.

                at com.informatica.platform.ldtm.executor.universal.UniversalExecutor.run(UniversalExecutor.java:133)

                at com.informatica.platform.ldtm.executor.ExecutionEngine$SubmittedRunnable.run(ExecutionEngine.java:593)

                at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

                at java.util.concurrent.FutureTask.run(FutureTask.java:266)

                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

                at java.lang.Thread.run(Thread.java:748)

            Caused by: java.util.concurrent.CompletionException: com.informatica.sdk.dtm.ExecutionException: [GRIDDTM_1011] The Integration Service failed to execute grid mapping.

                at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)

                at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)

                at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:769)

                at java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:778)

                at java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2140)

                at com.informatica.platform.dtm.executor.grid.task.impl.GridMappingTaskHandlerImpl.executeMainScriptAsync(GridMappingTaskHandlerImpl.java:156)

                at com.informatica.executor.workflow.taskhandler.impl.BaseTaskHandlerImpl.startTaskAsync(BaseTaskHandlerImpl.java:234)

                at com.informatica.executor.workflow.taskhandler.impl.BaseTaskHandlerImpl.runAsync(BaseTaskHandlerImpl.java:200)

                at com.informatica.executor.workflow.taskhandler.impl.BaseTaskHandlerImpl.run(BaseTaskHandlerImpl.java:119)

                ... 5 more

            Caused by: com.informatica.sdk.dtm.ExecutionException: [GRIDDTM_1011] The Integration Service failed to execute grid mapping.

                at com.informatica.platform.dtm.executor.grid.GridExecutor.runAsync(GridExecutor.java:308)

                at com.informatica.platform.dtm.executor.grid.task.impl.GridMappingTaskHandlerImpl.executeMainScriptAsync(GridMappingTaskHandlerImpl.java:149)

                ... 8 more

               

            Here is the log from the YARN application:

            2021-04-20 16:27:57.751    INFO: [LOG_MGM_200013] Initializing local service log directory

            2021-04-20 16:27:57.752    INFO: [LOG_MGM_20007] Node Root Log Directory is configured to [/infa/blaze]

            2021-04-20 16:27:57.796 WARNING: [LOG_MGM_20010] Node Root Log Directory [/infa/blaze] creation failed

            2021-04-20 16:27:57.796    INFO: [LOG_MGM_20004] Switching to current working directory [/data/21/yarn/nm/usercache/infabri/appcache/application_1618906996898_0031/container_e94_1618906996898_0031_01_000001] as container's local log directory

            2021-04-20 16:27:57.797    INFO: [LOG_MGM_20005] Initializing local log directories in current working directory [/data/21/yarn/nm/usercache/infabri/appcache/application_1618906996898_0031/container_e94_1618906996898_0031_01_000001]

            2021-04-20 16:27:57.798    INFO: [LOG_MGM_20015] Container local log directory [/data/21/yarn/nm/usercache/infabri/appcache/application_1618906996898_0031/container_e94_1618906996898_0031_01_000001/application_1618906996898_0031] successfully created

            2021-04-20 16:27:57.798    INFO: [LOG_MGM_20019] Service local log directory [/data/21/yarn/nm/usercache/infabri/appcache/application_1618906996898_0031/container_e94_1618906996898_0031_01_000001/application_1618906996898_0031/servicelogs] successfully created

            2021-04-20 16:27:57.808    INFO: [LOG_MGM_20025] Local log for the service [Blaze_Grid_Manager_Service] is stored at [/data/21/yarn/nm/usercache/infabri/appcache/application_1618906996898_0031/container_e94_1618906996898_0031_01_000001/application_1618906996898_0031/servicelogs/Blaze_Grid_Manager_Service.log]

            2021-04-20 16:27:57.891    INFO: [HADOOP_API_0008] The Integration Service found Informatica Hadoop distribution directory [/data/20/yarn/nm/filecache/10/infa_rpm.tar/services/shared/hadoop/CDH_5.15/lib and /conf] for Hadoop class loader.

            2021-04-20 16:27:57.892    INFO: [HADOOP_API_0008] The Integration Service found Informatica Hadoop distribution directory [/data/20/yarn/nm/filecache/10/infa_rpm.tar/services/shared/hadoop/CDH_5.15/infaLib] for Hadoop class loader.

            2021-04-20 16:27:57.892    INFO: [HADOOP_API_0005] The Integration Service created a Hadoop class loader [/data/20/yarn/nm/filecache/10/infa_rpm.tar/services/shared/hadoop/CDH_5.15].

            log4j:ERROR Could not read configuration file from URL [file:/run/cloudera-scm-agent/process/11148-yarn-NODEMANAGER/log4j.properties].

            java.io.FileNotFoundException: /run/cloudera-scm-agent/process/11148-yarn-NODEMANAGER/log4j.properties (Permission denied)

                at java.io.FileInputStream.open0(Native Method)

                at java.io.FileInputStream.open(FileInputStream.java:195)

                at java.io.FileInputStream.<init>(FileInputStream.java:138)

                at java.io.FileInputStream.<init>(FileInputStream.java:93)

                at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)

                at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)

                at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)

                at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)

                at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)

                at org.apache.log4j.Logger.getLogger(Logger.java:104)

                at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)

                at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)

                at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

                at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

                at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

                at java.lang.reflect.Constructor.newInstance(Constructor.java:423)

                at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1025)

                at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:844)

                at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:541)

                at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:292)

                at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:269)

                at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)

                at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:180)

                at com.informatica.platform.dtm.executor.hadoop.yarn.impl.AbstractYarnFactoryBaseImpl.newYarnConfiguration(AbstractYarnFactoryBaseImpl.java:60)

                at com.informatica.cal.yarn.client.AbstractClientImpl.<init>(AbstractClientImpl.java:87)

                at com.informatica.cal.yarn.client.AppClusterMgrClientImpl.<init>(AppClusterMgrClientImpl.java:214)

                at com.informatica.cal.yarn.client.YarnCALFactory.newAppClusterMgrClient(YarnCALFactory.java:35)

                at com.informatica.dtm.executor.grid.cadi_grid_manager.svc.CADIGridMgrSvcManager.<init>(CADIGridMgrSvcManager.java:196)

                at com.informatica.dtm.executor.grid.cadi_grid_manager.svc.CADIGridMgrSvcStarter.execute(CADIGridMgrSvcStarter.java:64)

                at com.informatica.dtm.executor.grid.svcfw.launcher.GridProcessLauncher.launchApp(GridProcessLauncher.java:100)

                at com.informatica.dtm.executor.grid.svcfw.launcher.GridProcessLauncher.main(GridProcessLauncher.java:39)

            log4j:ERROR Ignoring configuration file [file:/run/cloudera-scm-agent/process/11148-yarn-NODEMANAGER/log4j.properties].

            log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).

            log4j:WARN Please initialize the log4j system properly.

            log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

             

            Thanks,

            Suyanto


              • 4. Re: How To Run Mapping using Blaze runtime connect to Cloudera 5.x
                Theja Karanam New Member

                Hi Suyanto,

                 

                 Thank you for sharing the log information. The shared log snippet shows the following error, which usually occurs when there is a connectivity issue between the Informatica DIS server and the cluster nodes:

                 

                Error Snippet:

                 

                [[CAL_API_1] The Integration Service encountered an unexpected error condition: [java.lang.RuntimeException: Connection timeout; application [Blaze_Grid_Manager_Service] running on host:port [n25.bigdata.bri.co.id:12496] is still initializing..

                 

                 

                 Based on the above stack trace, can you please check and confirm whether the required ports are open between the Informatica server machine and the Hadoop cluster nodes?
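
                 One quick way to verify this from the DIS host is a plain TCP reachability check. Below is a minimal sketch in Python (not an Informatica utility), which assumes the Grid Manager host and port reported in the error above (n25.bigdata.bri.co.id:12496); substitute the values from your own log.

                 import socket

                 # Host and port taken from the CAL_API_1 error in this thread;
                 # replace them with the values reported in your own mapping log.
                 HOST = "n25.bigdata.bri.co.id"
                 PORT = 12496

                 try:
                     # Attempt a plain TCP connection with a short timeout.
                     with socket.create_connection((HOST, PORT), timeout=5):
                         print(f"TCP connection to {HOST}:{PORT} succeeded")
                 except OSError as err:
                     print(f"TCP connection to {HOST}:{PORT} failed: {err}")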

                 

                Thank You

                Theja

                • 5. Re: How To Run Mapping using Blaze runtime connect to Cloudera 5.x
                  Niaga Prima Seasoned Veteran

                  Hi Theja,

                   

                   

                   

                   Yes, this port is open; I can telnet to it.

                   

                   

                  Thanks,

                   

                  Suyanto

                  • 6. Re: How To Run Mapping using Blaze runtime connect to Cloudera 5.x
                    Krishnan Sreekandath Seasoned Veteran

                    Hello Suyanto,

                     

                     Can you please ensure that the complete port range, from the minimum port to the maximum port specified in the Blaze configuration section of the Hadoop connection, is open from the DIS node(s)?

                     

                     By default, it is 12300 to 12600.
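
                     To confirm that the whole range is reachable from the DIS node, a rough port-by-port sweep can help. The sketch below is a hypothetical Python example, assuming the default 12300 to 12600 range and using a placeholder cluster host; run it from the DIS node against each cluster node that can run Blaze containers.

                     import socket

                     # Placeholder cluster node; replace with each node that can host Blaze containers.
                     CLUSTER_HOST = "n25.bigdata.bri.co.id"
                     MIN_PORT, MAX_PORT = 12300, 12600  # default Blaze port range in the Hadoop connection

                     suspect = []
                     for port in range(MIN_PORT, MAX_PORT + 1):
                         try:
                             # A successful connection means something is already listening on the port.
                             with socket.create_connection((CLUSTER_HOST, port), timeout=1):
                                 pass
                         except ConnectionRefusedError:
                             # Refused means the host answered: the port is reachable, just not in use yet.
                             pass
                         except OSError:
                             # A timeout or other error suggests the port may be filtered by a firewall.
                             suspect.append(port)

                     print(f"Ports that may be blocked on {CLUSTER_HOST}: {suspect or 'none'}")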

                     

                     Also, from the log snippet above, we cannot determine whether the rest of the Blaze infrastructure containers started. Can you please confirm whether other components, such as the OOP Container Manager and the Data Exchange Framework, were also started?

                     

                    Thanks,

                    Krishnan

                    • 7. Re: How To Run Mapping using Blaze runtime connect to Cloudera 5.x
                      user186817 Seasoned Veteran

                      Hi Suyanto,

                       

                       Please follow the steps in the article below to collect the full Blaze logs for your application:

                       

                       

                       That should help us better understand the problem you are facing.

                       

                      Regards,
                      Lluís