
Hitachi HCE-5920 Dumps


Hitachi Vantara Certified Specialist - Pentaho Data Integration Implementation Questions and Answers

Question 1

A transformation is running in a production environment and you want to monitor it in real time.

Which tool should you use?

Options:

A. Pentaho Operations Mart
B. Kettle status page
C. Log4j
D. Monitoring tab
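
Background for this question: a transformation running on a Carte or Pentaho DI server publishes its live status over HTTP through the Kettle status page. Below is a minimal Python sketch that polls that page, assuming a hypothetical server at localhost:8081 with the default cluster/cluster credentials; adjust the host, port, and login for a real environment.

    import urllib.request

    # Hypothetical Carte/DI server location and default credentials.
    url = "http://localhost:8081/kettle/status/?xml=Y"
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, "cluster", "cluster")
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))

    # The status servlet returns XML listing the transformations and jobs
    # currently known to the server, with their states and step metrics.
    with opener.open(url) as resp:
        print(resp.read().decode("utf-8"))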

Question 2

You have completed a successful installation of a Pentaho server on Linux.

You now need to write a script to run the Pentaho server as a service.

Which two files should you call from the script? (Choose two.)


Options:

A. start-pentaho.sh
B. start-pentaho-debug.sh
C. import-export.sh
D. stop-pentaho.sh
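
For context, start-pentaho.sh and stop-pentaho.sh are the stock launch and shutdown scripts shipped in the server directory. A service wrapper is normally written as a shell script, but its dispatch logic amounts to the following Python sketch; the install path here is hypothetical.

    import subprocess
    import sys

    # Hypothetical install location; adjust to your server directory.
    PENTAHO_HOME = "/opt/pentaho/server/pentaho-server"

    def main():
        action = sys.argv[1] if len(sys.argv) > 1 else ""
        if action == "start":
            # start-pentaho.sh brings up Tomcat and the Pentaho web app.
            subprocess.run([f"{PENTAHO_HOME}/start-pentaho.sh"], check=True)
        elif action == "stop":
            # stop-pentaho.sh shuts the server down cleanly.
            subprocess.run([f"{PENTAHO_HOME}/stop-pentaho.sh"], check=True)
        else:
            print("usage: pentaho-service.py {start|stop}")
            sys.exit(1)

    if __name__ == "__main__":
        main()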

Question 3

Which statement is true for a transformation?

Options:

A. Transformation steps are executed in parallel.
B. A transformation must have a start step.
C. A transformation step processes one row at a time.
D. A transformation step can have only one incoming hop and one outgoing hop.
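
Background: the PDI engine starts all steps of a transformation together and streams rows between them, so every step works on one row at a time while its neighbours keep running. The Python generators below approximate that streaming model (in PDI each step actually gets its own thread); running it interleaves the read, transform, and write messages instead of finishing each stage first.

    # Each "step" pulls rows from the previous one and yields results
    # immediately, so downstream work starts before upstream work ends.
    def read_rows():
        for i in range(3):
            print(f"read row {i}")
            yield i

    def transform(rows):
        for row in rows:
            print(f"transform row {row}")
            yield row * 10

    for out in transform(read_rows()):
        print(f"write row {out}")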

Question 4

You have a string field in your dataset where you need to extract characters 1-5 only.

Which two steps will accomplish this task? (Choose two.)


Options:

A. the Strings Cut step
B. the String Operations step
C. the Split Fields step
D. the Modified Java Script Value step
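
For context, pulling a fixed character range out of a field is plain substring work. The sketch below shows the intended result in Python with a made-up field value; in PDI the same operation is expressed through a step's cut-from/cut-to configuration.

    # Characters 1-5 of the field, i.e. the first five characters.
    field = "HCE-5920"
    print(field[0:5])  # prints "HCE-5"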

Question 5

You need to load data from many CSV files into a database and you want to minimize the number of PDI jobs and transformations that need to be maintained.

In which two scenarios is metadata injection the recommended option? (Choose two.)


Options:

A. When the files have a different structure and have different target tables.
B. When the files have a different structure and have the same target table.
C. When the files have the same structure and have different target tables.
D. When the files have the same structure and have the same target table.
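
Background: metadata injection keeps a single template transformation and pushes the per-run details, such as the file layout and the target table, into it at execution time. The Python sketch below mimics the idea with one generic loader driven by a list of made-up sources and table names.

    import csv
    import io

    # A generic "template" loader: the metadata (source data and target
    # table) arrives at run time instead of being hard-coded per file.
    def load_csv(text, table):
        for row in csv.DictReader(io.StringIO(text)):
            print(f"INSERT INTO {table}: {row}")

    sources = [
        ("id,amount\n1,10\n2,20\n", "orders"),
        ("id,amount\n3,30\n", "orders_archive"),
    ]
    for text, table in sources:
        load_csv(text, table)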

Question 6

You need to process data on the nodes within a Hadoop cluster. To accomplish this task, you write a mapper and reducer transformation and use the Pentaho MapReduce entry to execute the MapReduce job on the cluster.

In this scenario, which two steps are required within the transformations? (Choose two.)


Options:

A. the Hadoop File Input step
B. the Hadoop File Output step
C. the MapReduce Input step
D. the MapReduce Output step
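
Background: a mapper or reducer transformation exchanges key/value pairs with the Hadoop cluster through dedicated entry and exit steps. This plain-Python word count sketches the same mapper/reducer contract, with made-up input lines.

    from collections import defaultdict

    def mapper(line):
        # Emit a (key, value) pair per word, as a mapper transformation would.
        for word in line.split():
            yield word, 1

    def reducer(key, values):
        # Aggregate all values seen for one key.
        yield key, sum(values)

    lines = ["pentaho data integration", "pentaho pdi"]
    grouped = defaultdict(list)
    for line in lines:
        for k, v in mapper(line):
            grouped[k].append(v)
    for k, vals in grouped.items():
        for key, total in reducer(k, vals):
            print(key, total)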

Question 7

You have a PDI input step that generates data within a transformation.

Which two statements are true about downstream steps in this scenario? (Choose two.)


Options:

A. The steps will receive a stream of data from the input step as soon as it is available.
B. Only one step can receive data from the input step.
C. The steps will receive the data once the input step fully fetches it.
D. Multiple steps can receive data from the input step.
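
Background: rows flow to downstream steps as soon as they are produced, and one step's output can feed several targets; PDI then either copies every row to each target or distributes rows among them. The sketch below uses itertools.tee to mimic the copy behaviour.

    import itertools

    def input_step():
        for i in range(3):
            yield i

    # Two downstream "steps" each receive a full copy of the stream.
    branch_a, branch_b = itertools.tee(input_step(), 2)
    print("step A got:", list(branch_a))
    print("step B got:", list(branch_b))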

Question 8

You have multiple transformations that read and process data from multiple text files. You identify a series of steps that are common across the transformations and you want to re-use them to avoid duplicating code.

How do you accomplish this?

Options:

A. Use the 'Mapping (sub-transformation)' step containing the series of steps.
B. Use the 'ETL Metadata Injection' step containing the series of steps.
C. Use the 'Get data from XML' step to read the series of steps.
D. Use the 'Job Executor' step containing the series of steps.
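
Background: the reuse pattern the question describes is factoring the shared steps into one unit that every transformation invokes, rather than copying the steps around. In Python terms that unit is a shared function, as in this sketch with made-up field data.

    # Shared cleanup logic defined once and called from several pipelines.
    def clean_rows(rows):
        for row in rows:
            yield {k: v.strip().upper() for k, v in row.items()}

    sales = [{"name": " alice "}]
    inventory = [{"sku": " ab-1 "}]
    print(list(clean_rows(sales)))
    print(list(clean_rows(inventory)))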

Question 9

A Big Data customer wants to run PDI transformations on Spark on their production Hadoop cluster using Pentaho's Adaptive Execution Layer (AEL).

What are two steps for installing AEL? (Choose two.)


Options:

A. Run the Spark application builder tool to obtain the AEL daemon zip file.
B. Configure the AEL daemon in Local Mode.
C. Run the AEL Oozie job to install the AEL daemon.
D. Configure the AEL daemon in YARN Mode.
