LinkedIn skill assessment answers and questions — Hadoop
Hadoop is a popular framework for processing large-scale data sets using distributed computing. Many companies use Hadoop to store, manage, and analyze their data, and they need skilled professionals who can work with this technology. If you want to prove your Hadoop skills and get certified by LinkedIn, you need to pass the LinkedIn skill assessment test for Hadoop. This test consists of multiple-choice questions that cover various topics related to Hadoop, such as its architecture, components, commands, configuration, and more.
To help you prepare for this test, I have compiled a list of questions and answers that you may encounter in the exam. These questions and answers are based on my own experience and research, and they are not official or endorsed by LinkedIn. However, they can give you an idea of what to expect and how to answer the questions correctly. Here are the questions and answers for the LinkedIn skill assessment test for Hadoop.
Q1. Partitioner controls the partitioning of what data?
- final keys
- final values
- intermediate keys
- intermediate values
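For context on why intermediate keys are the answer: the Partitioner decides which reducer receives each intermediate key emitted by the mappers. Below is a minimal Python sketch of the idea behind Hadoop's default HashPartitioner; the function and reducer count are illustrative, not Hadoop's actual Java API.

```python
def partition(key: str, num_reducers: int) -> int:
    """Route an intermediate key to a reducer, as a partitioner does.

    Hadoop's default HashPartitioner computes the Java equivalent:
    (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks.
    """
    return (hash(key) & 0x7FFFFFFF) % num_reducers

# All values for the same intermediate key land on the same reducer,
# which is what makes grouping in the reduce phase possible.
print(partition("apple", 4) == partition("apple", 4))  # True
```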
Q2. SQL Windowing functions are implemented in Hive using which keywords?
- UNION DISTINCT, RANK
- OVER, RANK
- OVER, EXCEPT
- UNION DISTINCT, RANK
Q3. Rather than adding a Secondary Sort to a slow Reduce job, it is Hadoop best practice to perform which optimization?
- Add a partitioned shuffle to the Map job.
- Add a partitioned shuffle to the Reduce job.
- Break the Reduce job into multiple, chained Reduce jobs.
- Break the Reduce job into multiple, chained Map jobs.
Q4. Hadoop Auth enforces authentication on protected resources. Once authentication has been established, it sets what type of authenticating cookie?
- encrypted HTTP
- unsigned HTTP
- compressed HTTP
- signed HTTP
Q5. MapReduce jobs can be written in which language?
- Java or Python
- SQL only
- SQL or Java
- Python or SQL
Q6. To perform local aggregation of the intermediate outputs, MapReduce users can optionally specify which object?
- Reducer
- Combiner
- Mapper
- Counter
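To make the Combiner's role concrete, here is a small Python sketch of local aggregation applied to a mapper's output before the shuffle; the word-count pairs are hypothetical and this is not Hadoop's API.

```python
from collections import defaultdict

def combine(mapper_output):
    """Locally sum (word, count) pairs on the mapper node, so fewer
    intermediate records are shuffled across the network."""
    totals = defaultdict(int)
    for key, value in mapper_output:
        totals[key] += value
    return sorted(totals.items())

pairs = [("big", 1), ("data", 1), ("big", 1), ("big", 1)]
print(combine(pairs))  # [('big', 3), ('data', 1)]
```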
Q7. To verify job status, look for the value ___ in ___.
- SUCCEEDED; syslog
- SUCCEEDED; stdout
- DONE; syslog
- DONE; stdout
Q8. Which line of code implements a Reducer method in MapReduce 2.0?
- public void reduce(Text key, Iterator values, Context context){…}
- public static void reduce(Text key, IntWritable[] values, Context context){…}
- public static void reduce(Text key, Iterator values, Context context){…}
- public void reduce(Text key, IntWritable[] values, Context context){…}
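For intuition about the contract behind these signatures: the framework calls reduce once per key, passing all of that key's values. A Python sketch of the same semantics (illustrative only; the real method is the Java signature above):

```python
def reduce_counts(key, values):
    """Mimic a reduce() call: invoked once per key with an iterator of
    that key's values, emitting (key, aggregate) output pairs."""
    yield key, sum(values)

print(list(reduce_counts("hadoop", iter([1, 1, 1]))))  # [('hadoop', 3)]
```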
Q9. To get the total number of mapped input records in a map job task, you should review the value of which counter?
- FileInputFormatCounter
- FileSystemCounter
- JobCounter
- TaskCounter (NOT SURE)
Q10. Hadoop Core supports which CAP capabilities?
- A, P
- C, A
- C, P
- C, A, P
Q11. What are the primary phases of a Reducer?
- combine, map, and reduce
- shuffle, sort, and reduce
- reduce, sort, and combine
- map, sort, and combine
Q12. To set up a Hadoop workflow with synchronization of data between jobs that process tasks both on disk and in memory, use the ___ service, which is ___.
- Oozie; open source
- Oozie; commercial software
- Zookeeper; commercial software
- Zookeeper; open source
Q13. For high availability, which type of multiple nodes should you use?
- data
- name
- memory
- worker
Q14. DataNode supports which type of drives?
- hot swappable
- cold swappable
- warm swappable
- non-swappable
Q15. Which method is used to implement Spark jobs?
- on disk of all workers
- on disk of the master node
- in memory of the master node
- in memory of all workers
Q16. In a MapReduce job, where does the map() function run?
- on the reducer nodes of the cluster
- on the data nodes of the cluster (NOT SURE)
- on the master node of the cluster
- on every node of the cluster
Q17. To reference a master file for lookups during Mapping, what type of cache should be used?
- distributed cache
- local cache
- partitioned cache
- cluster cache
Q18. Skip bad records provides an option where a certain set of bad input records can be skipped when processing what type of data?
- cache inputs
- reducer inputs
- intermediate values
- map inputs
Q19. Which command imports data to Hadoop from a MySQL database?
- spark import --connect jdbc:mysql://mysql.example.com/spark --username spark --warehouse-dir user/hue/oozie/deployments/spark
- sqoop import --connect jdbc:mysql://mysql.example.com/sqoop --username sqoop --warehouse-dir user/hue/oozie/deployments/sqoop
- sqoop import --connect jdbc:mysql://mysql.example.com/sqoop --username sqoop --password sqoop --warehouse-dir user/hue/oozie/deployments/sqoop
- spark import --connect jdbc:mysql://mysql.example.com/spark --username spark --password spark --warehouse-dir user/hue/oozie/deployments/spark
Q20. In what form is Reducer output presented?
- compressed (NOT SURE)
- sorted
- not sorted
- encrypted
Q21. Which library should be used to unit test MapReduce code?
- JUnit
- XUnit
- MRUnit
- HadoopUnit
Q22. If you started the NameNode, then which kind of user must you be?
- hadoop-user
- super-user
- node-user
- admin-user
Q23. State _ between the JVMs in a MapReduce job.
- can be configured to be shared
- is partially shared
- is shared
- is not shared (https://www.lynda.com/Hadoop-tutorials/Understanding-Java-virtual-machines-JVMs/191942/369545-4.html)
Q24. To create a MapReduce job, what should be coded first?
- a static job() method
- a Job class and instance (NOT SURE)
- a job() method
- a static Job class
Q25. To connect Hadoop to AWS S3, which client should you use?
- S3A
- S3N
- S3
- the EMR S3
Q26. HBase works with which type of schema enforcement?
- schema on write
- no schema
- external schema
- schema on read
Q27. HDFS files are of what type?
- read-write
- read-only
- write-only
- append-only
Q28. A distributed cache file path can originate from what location?
- hdfs or top
- http
- hdfs or http
- hdfs
Q29. Which library should you use to perform ETL-type MapReduce jobs?
- Hive
- Pig
- Impala
- Mahout
Q30. What is the output of the Reducer?
- a relational table
- an update to the input file
- a single, combined list
- a set of <key, value> pairs

The map function processes a certain key-value pair and emits a certain number of key-value pairs, and the reduce function processes values grouped by the same key and emits another set of key-value pairs as output.
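The sentence above can be simulated end to end in a few lines of Python. This is a sketch of the map, shuffle/sort, and reduce phases for a word count, not Hadoop code:

```python
from collections import defaultdict

def map_phase(line):
    # Emit an intermediate (word, 1) pair for every word.
    for word in line.split():
        yield word, 1

def shuffle_and_sort(pairs):
    # Group intermediate values by key, sorted by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return sorted(groups.items())

def reduce_phase(key, values):
    # Emit one (key, total) pair per key.
    yield key, sum(values)

lines = ["big data", "big deal"]
intermediate = [p for line in lines for p in map_phase(line)]
output = [r for k, vs in shuffle_and_sort(intermediate)
            for r in reduce_phase(k, vs)]
print(output)  # [('big', 2), ('data', 1), ('deal', 1)]
```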
Q31. To optimize a Mapper, what should you perform first?
- Override the default Partitioner.
- Skip bad records.
- Break up Mappers that do more than one task into multiple Mappers.
- Combine Mappers that do one task into large Mappers.
Q32. When implemented on a public cloud, with what does Hadoop processing interact?
- files in object storage
- graph data in graph databases
- relational data in managed RDBMS systems
- JSON data in NoSQL databases
Q33. In the Hadoop system, what administrative mode is used for maintenance?
- data mode
- safe mode
- single-user mode
- pseudo-distributed mode
Q34. In what format does RecordWriter write an output file?
- <key, value> pairs
- keys
- values
- <value, key> pairs
Q35. To what does the Mapper map input key/value pairs?
- an average of keys for values
- a sum of keys for values
- a set of intermediate key/value pairs
- a set of final key/value pairs
Q36. Which Hive query returns the first 1,000 values?
- SELECT…WHERE value = 1000
- SELECT … LIMIT 1000
- SELECT TOP 1000 …
- SELECT MAX 1000…
Q37. To implement high availability, how many instances of the master node should you configure?
- one
- zero
- shared
- two or more (https://data-flair.training/blogs/hadoop-high-availability-tutorial)
Q38. Hadoop 2.x and later implement which service as the resource coordinator?
- kubernetes
- JobManager
- JobTracker
- YARN
Q39. In MapReduce, _ have _.
- tasks; jobs
- jobs; classes
- jobs; tasks
- classes; tasks
Q40. What type of software is Hadoop Common?
- database
- distributed computing framework
- operating system
- productivity tool
Q41. If no reduction is desired, you should set the numbers of _ tasks to zero.
- combiner
- reduce
- mapper
Q42. MapReduce applications use which of these classes to report their statistics?
- mapper
- reducer
- combiner
- counter
Q43. _ is the query language, and _ is storage for NoSQL on Hadoop.
- HDFS; HQL
- HQL; HBase
- HDFS; SQL
- SQL; HBase
Q44. MapReduce 1.0 _ YARN.
- does not include
- is the same thing as
- includes
- replaces
Q45. Which type of Hadoop node executes file system namespace operations like opening, closing, and renaming files and directories?
- ControllerNode
- DataNode
- MetadataNode
- NameNode
Q46. HQL queries produce which job types?
- Impala
- MapReduce
- Spark
- Pig
Q47. Suppose you are trying to finish a Pig script that converts text in the input string to uppercase. What code is needed on line 2 below?

1 data = LOAD '/user/hue/pig/examples/data/midsummer.txt'…
2

- as (text:CHAR[]); upper_case = FOREACH data GENERATE org.apache.pig.piggybank.evaluation.string.UPPER(TEXT);
- as (text:CHARARRAY); upper_case = FOREACH data GENERATE org.apache.pig.piggybank.evaluation.string.UPPER(TEXT);
- as (text:CHAR[]); upper_case = FOREACH data org.apache.pig.piggybank.evaluation.string.UPPER(TEXT);
- as (text:CHARARRAY); upper_case = FOREACH data org.apache.pig.piggybank.evaluation.string.UPPER(TEXT);
Q48. In a MapReduce job, which phase runs after the Map phase completes?
- Combiner
- Reducer
- Map2
- Shuffle and Sort
Q49. Where would you configure the size of a block in a Hadoop environment?
- dfs.block.size in hdfs-site.xml
- orc.write.variable.length.blocks in hive-default.xml
- mapreduce.job.ubertask.maxbytes in mapred-site.xml
- hdfs.block.size in hdfs-site.xml
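For reference, the block-size setting corresponds to a property entry like the following hdfs-site.xml fragment. The 128 MB value is illustrative only; newer Hadoop releases spell the property dfs.blocksize, keeping dfs.block.size as a deprecated alias.

```xml
<!-- hdfs-site.xml: illustrative HDFS block-size setting -->
<configuration>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value> <!-- 128 MB, in bytes -->
  </property>
</configuration>
```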
Q50. Hadoop systems are _ RDBMS systems.
- replacements for
- not used with
- substitutes for
- additions for
Q51. Which object can be used to distribute jars or libraries for use in MapReduce tasks?
- distributed cache
- library manager
- lookup store
- registry
Q52. To view the execution details of an Impala query plan, which function would you use?
- explain
- query action
- detail
- query plan
Q53. Which feature is used to roll back a corrupted HDFS instance to a previously known good point in time?
- partitioning
- snapshot
- replication
- high availability
Q54. Hadoop Common is written in which language?
- C
- Haskell
- Java
Q55. Which file system does Hadoop use for storage?
- NAS
- FAT
- HDFS
- NFS
Q56. What kind of storage and processing does Hadoop support?
- encrypted
- verified
- distributed
- remote
Q57. Hadoop Common consists of which components?
- Spark and YARN
- HDFS and MapReduce
- HDFS and S3
- Spark and MapReduce
Q58. Most Apache Hadoop committers' work is done at which commercial company?
- Cloudera
- Microsoft
- Amazon
Q59. To get information about Reducer job runs, which object should be added?
- Reporter
- IntReadable
- IntWritable
- Writer
Q60. After changing the default block size and restarting the cluster, to which data does the new size apply?
- all data
- no data
- existing data
- new data
Q61. Which statement should you add to improve the performance of the following query?

SELECT
c.id,
c.name,
c.email_preferences.categories.surveys
FROM customers c;

- SUB-SELECT
- SORT
Q62. What custom object should you implement to reduce IO in MapReduce?
- Comparator
- Mapper
- Combiner
- Reducer
Q63. You can optimize Hive queries using which method?
- secondary indices
- summary statistics
- column-based statistics
- a primary key index
Q64. If you are processing a single action on each input, what type of job should you create?
- partition-only
- map-only
- reduce-only
- combine-only
Q65. The simplest possible MapReduce job optimization is to perform which of these actions?
- Add more master nodes.
- Implement optimized InputSplits.
- Add more DataNodes.
- Implement a custom Mapper.
Q66. When you implement a custom Writable, you must also define which of these objects?
- a sort policy
- a combiner policy
- a compression policy
- a filter policy
Q67. To copy a file into the Hadoop file system, which command should you use?
- hadoop fs -copy
- hadoop fs -copy
- hadoop fs -copyFromLocal
- hadoop fs -copyFromLocal
Q68. Delete a Hive _ table and you will delete the table _.
- managed; metadata
- external; data and metadata
- external; metadata
- managed; data
Q69. To see how Hive executed a JOIN operation, use the _ statement and look for the _ value.
- EXPLAIN; JOIN Operator
- QUERY; MAP JOIN Operator
- EXPLAIN; MAP JOIN Operator
- QUERY; JOIN Operator
Q70. Pig operates in mainly how many modes?
- Two
- Three
- Five
Q71. After loading data, _ and then run a(n) _ query for interactive queries.
- invalidate metadata; Impala
- validate metadata; Impala
- invalidate metadata; Hive
- validate metadata; Hive
Q72. In Hadoop MapReduce job code, what must be static?
- configuration
- Mapper and Reducer
- Mapper
- Reducer
Q73. In Hadoop simple mode, which object determines the identity of a client process?
- Kerberos ticket
- kubernetes token
- guest operating system
- host operating system
Q74. Which is not a valid input format for a MapReduce job?
- FileReader
- CompositeInputFormat
- RecordReader
- TextInputFormat
Q75. If you see org.apache.hadoop.mapred, which version of MapReduce are you working with?
- 1.x
- 0.x
- 2.x
- 3.x