APACHE SPARK COMBINEBYKEY EXAMPLE Spark’s combineByKey RDD transformation is very similar to the combiner in Hadoop MapReduce programming. In this post, we’ll discuss a Spark combineByKey example in depth and try to understand the importance of this function in detail. Spark combineByKey is a transformation operation on a PairRDD (i.e. an RDD of key/value pairs). It is a wide operation, as it requires a shuffle in the last stage.
UNDERSTANDING APACHE SPARK ARCHITECTURE The Apache Spark architecture enables you to write computation applications which are almost 10x faster than traditional Hadoop MapReduce applications. We have already discussed the features of Apache Spark in the introductory post. Apache Spark doesn’t provide any storage (like HDFS) or any resource management capabilities.
YARN ARCHITECTURE AND COMPONENTS We discussed a high-level view of the YARN architecture in my post on Understanding Hadoop 2.x Architecture, but YARN itself is a wider subject to understand. Keeping that in mind, we’ll discuss the YARN architecture, its components and its advantages in this post.
APACHE SPARK REDUCE EXAMPLE Apache Spark reduce example. In the image above you can see that we are doing a cumulative sum of the numbers from 1 to 10 using the reduce function. Here the reduce method accepts a function (accum, n) => (accum + n). This function initializes the accum variable with the default integer value 0, adds up an element each time reduce is called, and returns the final value.
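The reduce walkthrough above can’t be run without a Spark cluster here, but the same fold semantics can be sketched in plain Python; in this illustration, functools.reduce plays the role of rdd.reduce:

```python
from functools import reduce

# Cumulative sum of 1 to 10, mirroring Spark's rdd.reduce semantics:
# the binary function (accum, n) => (accum + n) is applied pairwise.
numbers = list(range(1, 11))
total = reduce(lambda accum, n: accum + n, numbers)
print(total)  # 55
```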
APACHE SPARK FILTER EXAMPLE Apache Spark filter example. As you can see in the image above, RDD X is the source RDD; it contains the elements 1 to 5 and has two partitions. The filter operation takes a predicate f(x) as an argument, something like x % 2 == 0, which returns true for even elements and false for odd elements. RDD Y is the resulting RDD, which will contain only the elements that satisfy the predicate.
APACHE SPARK AGGREGATEBYKEY EXAMPLE The aggregateByKey function in Spark accepts a total of 3 parameters. The first is the initial value, or zero value. It can be 0 if the aggregation is a sum of all values. We use Double.MaxValue if the aggregation objective is to find the minimum value, and Double.MinValue if the objective is to find the maximum value.
UNDERSTANDING HADOOP2 ARCHITECTURE AND ITS DAEMONS Hadoop 2 has some new Java APIs and features in HDFS and MapReduce, known as HDFS2 and MR2 respectively. The new architecture adds features like HDFS High Availability and HDFS Federation. From Hadoop 2.x onwards, the JobTracker and TaskTracker daemons are no longer used for resource management; instead it uses YARN (Yet Another Resource Negotiator).
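The filter predicate described above can also be sketched without a cluster; here x and y are plain lists standing in for RDD X and RDD Y:

```python
# RDD X holds 1 to 5; the predicate keeps even elements only.
x = [1, 2, 3, 4, 5]
f = lambda n: n % 2 == 0          # returns True for even, False for odd
y = [n for n in x if f(n)]        # models RDD Y, the filtered result
print(y)  # [2, 4]
```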
7 STEPS TO INSTALL APACHE HIVE WITH HADOOP ON CENTOS Before we learn to install Apache Hive on CentOS, let me give you an introduction to it. Hive is basically a data warehouse tool to store and process the structured data residing on HDFS. Hive was developed by Facebook and was later moved to the Apache Software Foundation, where it became the open source Apache Hive.
APACHE SPARK GROUPBYKEY EXAMPLE The Spark RDD groupByKey function collects the values for each key in the form of an iterator. As the name suggests, the groupByKey function in Apache Spark just groups all values with respect to a single key. Unlike reduceByKey, it doesn’t perform any operation on the final output. It just groups the data and returns it in the form of an iterator.
APACHE SPARK REDUCEBYKEY EXAMPLE Apache Spark reduceByKey example. In the image above you can see that RDD X has a set of multiple paired elements like (a,1) and (b,1) with 3 partitions. It accepts a function (accum, n) => (accum + n), which initializes the accum variable with the default integer value 0, adds up an element for each key, and returns a final RDD Y with the total counts paired with each key.
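The reduceByKey merge can likewise be sketched in plain Python; this hypothetical snippet folds (a,1)/(b,1)-style pairs per key with the same (accum, n) => (accum + n) function:

```python
# Pairs as they might appear across RDD X's partitions.
pairs = [("a", 1), ("b", 1), ("a", 1), ("b", 1), ("a", 1)]

# Per-key fold, mirroring reduceByKey's merge of values for each key.
counts = {}
for key, n in pairs:
    counts[key] = counts.get(key, 0) + n
print(counts)  # {'a': 3, 'b': 2}
```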
CREATING NUMPY ARRAY FOR BEGINNERS Creating a numpy array from a Python list or nested lists. You can create a numpy array by casting a Python list: simply pass the Python list to the np.array() method as an argument and you are done. This will return a 1D numpy array, or a vector. In case you want to create a 2D numpy array, or a matrix, simply pass a Python list of lists to the np.array() method.
UNDERSTANDING HADOOP2 ARCHITECTURE AND ITS DAEMONS Before learning the concepts of the Hadoop 2.x architecture, I strongly recommend you refer to my posts on the Hadoop Core Components and the internals of the Hadoop 1.x architecture and its limitations. They will give you an idea of the requirements behind the Hadoop2 architecture.
HOW TO ACCESS FILES FROM HDFS? If you don’t have that, follow all the steps given in my post Setup Multi Node Hadoop 2.6.0 Cluster with YARN. Below is the list of commands for accessing HDFS.
## Creates directory under root directory
$ hdfs dfs -mkdir /user
## -p is used for creating multiple child directories
$ hdfs dfs -mkdir -p /user/root/backtobazics
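The list casting described in the CREATING NUMPY ARRAY snippet above can be shown in a short sketch (assumes numpy is installed; the values are illustrative):

```python
import numpy as np

vector = np.array([1, 2, 3])            # 1D array (vector) from a flat list
matrix = np.array([[1, 2], [3, 4]])     # 2D array (matrix) from a list of lists
print(vector.ndim, matrix.ndim)  # 1 2
```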
SIMPLE EXPLANATION OF HADOOP CORE COMPONENTS: HDFS AND MAPREDUCE Before this post we discussed what Hadoop is and what kinds of issues are solved by Hadoop. Now let’s dive into the various components of Hadoop. Hadoop as a whole distribution provides only two core components: HDFS (the Hadoop Distributed File System) and MapReduce (a distributed batch processing framework).
SCALA EXCEPTION HANDLING BY CODE EXAMPLES Important rules for Scala exception handling: In Scala, all exceptions are unchecked exceptions (there are no checked exceptions as in Java). An exception catch block uses pattern matching to handle exceptions. If you have declared a finally block, it will always be executed whether or not an exception is thrown from the try block. In order to handle an exception, you must have a try block followed by catch and/or finally.
HOW TO EXECUTE SCALA SCRIPT ON WINDOWS AND UNIX When you write any .bat Scala script, just make sure that you write the first 5 lines of the above script first and then start your scripting. Executing the message.bat file from the Windows command prompt, you will get “Hello, Welcome to Scala Script..!!!!!” as the output. In the above 5 lines, the call scala command is responsible for executing your Scala script, whereas %0 and %* are parameters.
BUILDING SPARK APPLICATION JAR USING SCALA AND SBT Normally we create a Spark application JAR using Scala and SBT (the Scala Build Tool). In my previous post on Creating a Multi-node Spark Cluster we executed a word count example using the spark shell. As an extension to that, we’ll learn how to create a Spark application JAR file with Scala and SBT and how to execute it as a Spark job on a Spark cluster.
BACK TO BAZICS
BE EMPOWERED BY KNOWING THE BASICS
November 28, 2018 by Varun
USING PANDAS DESCRIBE METHOD TO GET DATAFRAME SUMMARY Data analysts often use the pandas describe method to get a high-level summary from a dataframe. The pandas describe method plays a very critical role in understanding the data distribution of each column. Continue reading “Using pandas describe method to get dataframe summary” →
Python | Data Wrangling, pandas, pandas dataframe, pandas describe, python
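A minimal sketch of the describe call (assumes pandas is installed; the column data is made up):

```python
import pandas as pd

df = pd.DataFrame({"age": [21, 35, 44], "salary": [3000, 4500, 6000]})
summary = df.describe()            # count, mean, std, min, quartiles, max per numeric column
print(summary.loc["mean", "age"])  # mean of the age column
```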
November 26, 2018 / November 24, 2018 by Varun
HOW TO SORT PANDAS DATAFRAME | SORTING PANDAS DATAFRAMES In this post, we will focus on all the features related to sorting a pandas dataframe. Pandas is a highly used library in Python for data analysis, mainly because of its rich set of functionality. Continue reading “How to sort pandas dataframe | Sorting pandas dataframes” →
Python | Data Wrangling, pandas, pandas dataframe, pandas sort, pandas sort dataframe, python
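A minimal sort example using DataFrame.sort_values (pandas assumed installed; data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"name": ["c", "a", "b"], "score": [3, 1, 2]})
by_score = df.sort_values("score", ascending=False)   # highest score first
print(by_score["name"].tolist())  # ['c', 'b', 'a']
```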
November 25, 2018 / November 24, 2018 by Varun
PANDAS SERIES BASIC UNDERSTANDING | FIRST STEP TOWARDS DATA ANALYSIS A pandas series is a one-dimensional numpy array with labels. A pandas series can hold data of any datatype (i.e. integer, string, float, datetime, etc.). The labels of this numpy array are called indexes, and they too can be of any datatype. Continue reading “Pandas series Basic Understanding | First step towards data analysis” →
Python | Data Wrangling, pandas, pandas series, python
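A short sketch of a labelled series (pandas assumed installed; values and labels are made up):

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=["a", "b", "c"])  # values with string labels
print(s["b"])     # 20 -> lookup by label (index)
print(s.iloc[0])  # 10 -> lookup by position
```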
November 24, 2018 by Varun
HOW TO DROP COLUMNS AND ROWS IN PANDAS DATAFRAME This post describes different ways of dropping columns or rows from a pandas dataframe. While performing any data analysis task you often need to remove certain columns or entire rows which are not relevant. So let’s learn how to remove columns or rows using the pandas drop function.
Continue reading “How to drop columns and rows in pandas dataframe” →
Python | Data Science, Data Wrangling, pandas, pandas drop columns, pandas drop rows, pandas.drop(), pandas.dropna(), python
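A minimal sketch of pandas drop for both columns and rows (pandas assumed installed; data is made up):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]})
no_c = df.drop(columns=["c"])    # remove a column
no_row0 = df.drop(index=[0])     # remove a row by index label
print(list(no_c.columns), no_row0.shape)  # ['a', 'b'] (1, 3)
```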
November 19, 2018 / November 24, 2018 by Varun
PANDAS TIME SERIES DATA MANIPULATION Pandas time series data manipulation is a must-have skill for any data analyst/engineer. More than 70% of the world’s structured data is time series data, and the pandas library in Python provides powerful functions/APIs for time series data manipulation. So let’s learn the basics of data wrangling using the pandas time series APIs. Continue reading “Pandas Time Series Data Manipulation” →
Python | Data Wrangling, pandas, pandas dataframe, pandas time series, python
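A small time-series sketch using a DatetimeIndex and label-based date slicing (pandas assumed installed; dates and values are illustrative):

```python
import pandas as pd

idx = pd.date_range("2018-11-01", periods=4, freq="D")  # four daily timestamps
ts = pd.Series([1, 2, 3, 4], index=idx)
first_two = ts["2018-11-01":"2018-11-02"]               # slice by date labels
print(first_two.sum())  # 3
```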
October 17, 2018 / November 24, 2018 by Varun
PANDAS READ CSV FILE | LOADING CSV WITH PANDAS READ_CSV In any data science/data analysis work, the first step is to read a CSV file (with the pandas library). The pandas read_csv function is a popular way to load any CSV file into pandas. In this post we’ll explore various options of the pandas read_csv function. Continue reading “Pandas Read CSV file | Loading CSV with pandas read_csv” →
Python | pandas, python
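read_csv accepts a file path or any file-like buffer; a self-contained sketch using an in-memory CSV (pandas assumed installed; the data is made up):

```python
import io
import pandas as pd

csv_text = "name,score\nalice,90\nbob,85\n"
df = pd.read_csv(io.StringIO(csv_text))  # the same call works with a file path
print(df.shape)  # (2, 2)
```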
September 25, 2018 / September 26, 2018 by Varun
APACHE SPARK COMBINEBYKEY EXAMPLE Spark’s combineByKey RDD transformation is very similar to the combiner in Hadoop MapReduce programming. In this post, we’ll discuss a Spark combineByKey example in depth and try to understand the importance of this function in detail. Continue reading “Apache Spark combineByKey Example” →
Big Data, Spark | Spark RDD, Transformations
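combineByKey takes three functions: one to create a combiner from a key’s first value, one to merge further values into it, and one to merge combiners across partitions. Those semantics can be sketched in plain Python; the function names here are illustrative stand-ins for Spark’s three arguments:

```python
# Two lists model the partitions of a pair RDD.
partitions = [[("a", 1), ("b", 2), ("a", 3)], [("a", 4), ("b", 5)]]

create_combiner = lambda v: [v]          # first value for a key in a partition
merge_value = lambda acc, v: acc + [v]   # fold further values into the combiner
merge_combiners = lambda a, b: a + b     # merge per-partition combiners

result = {}
for part in partitions:
    local = {}                           # per-partition combiners
    for k, v in part:
        local[k] = merge_value(local[k], v) if k in local else create_combiner(v)
    for k, acc in local.items():         # cross-partition merge (the shuffle stage)
        result[k] = merge_combiners(result[k], acc) if k in result else acc
print(result)  # {'a': [1, 3, 4], 'b': [2, 5]}
```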
August 27, 2018 by Varun
TRANSPOSING NUMPY ARRAY Transposing a numpy array is extremely simple using the np.transpose function. Fundamentally, transposing a numpy array only makes sense when the array has 2 or more dimensions. Continue reading “Transposing NumPy array” →
Python | numpy, python
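A minimal transpose sketch (numpy assumed installed; the array is illustrative):

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
t = np.transpose(a)                   # shape (3, 2); a.T is equivalent
print(t.shape)  # (3, 2)
```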
August 22, 2018 by Varun
CREATING PANDAS DATAFRAME FROM LISTS AND DICTIONARY OBJECTS In this post, we’ll learn to create a pandas dataframe from Python lists and dictionary objects. Creating a pandas dataframe is a fairly simple and basic step for data analysis. There are also other ways to create a dataframe (i.e. from CSV or Excel files, or even from database queries), but we’ll cover those in other posts. Continue reading “Creating Pandas Dataframe from Lists and Dictionary Objects” →
Python | pandas
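The two constructions mentioned above, sketched with made-up data (pandas assumed installed):

```python
import pandas as pd

# From a dict of equal-length lists: keys become column names.
df1 = pd.DataFrame({"name": ["a", "b"], "score": [1, 2]})
# From a list of dicts: each dict becomes a row.
df2 = pd.DataFrame([{"name": "a", "score": 1}, {"name": "b", "score": 2}])
print(df1.equals(df2))  # True
```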
August 21, 2018 by Varun
RESHAPING NUMPY ARRAY | NUMPY ARRAY RESHAPE EXAMPLES In Python, reshaping a numpy array can be very critical while creating a matrix or tensor from vectors. In order to reshape a numpy array of one dimension to n dimensions, one can use the np.reshape() method. Let’s check out some simple examples. Continue reading “Reshaping NumPy Array | Numpy Array Reshape Examples” →
Python | numpy, python
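A short np.reshape sketch going from a vector to a matrix to a 3-D tensor (numpy assumed installed):

```python
import numpy as np

v = np.arange(6)            # vector: [0 1 2 3 4 5]
m = np.reshape(v, (2, 3))   # 2x3 matrix; v.reshape(2, 3) is equivalent
t = m.reshape(1, 2, 3)      # add a leading axis -> 3-D tensor
print(m.shape, t.shape)  # (2, 3) (1, 2, 3)
```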
ABOUT ME
Blogger, Learner, Technology Specialist in Big Data, Data Analytics, Machine Learning, Deep Learning, Natural Language Processing. Read More
RECENT POSTS
* Using pandas describe method to get dataframe summary
* How to sort pandas dataframe | Sorting pandas dataframes
* Pandas series Basic Understanding | First step towards data analysis
* How to drop columns and rows in pandas dataframe
* Pandas Time Series Data Manipulation
TAGS
Actions
Array
Big Data
CentOS
Classes
Cluster Setup
Collections
Control Structure
Data Types
Data Wrangling
Debugging
Do While Loop
For Loop
FQDN
Functional Programming
Hadoop
HDFS
Hive
If Else
Immutable List
Installation
Java
Java 8
Keyless SSH
Lambda Expressions
MapReduce
numpy
Objects
pandas
pandas dataframe
Passwordless SSH
Pattern Matching
python
Scala
Singleton
Spark
Spark RDD
SSH
Stackable Modifications
Static IP
Traits
Transformations
Tutorial
While Loop
YARN
2018 Back To Bazics | The content is copyrighted and may not be reproduced on other websites.