Kafka: High Quality Posts
1. When putting a local file into HDFS with #hadoop fs -put in.txt /test, the following error appears:
hdfs.DFSClient: Exception in createBlockOutputStream java.net.NoRouteToHostException
Solution: shut down the firewall on all nodes (this applies to non-secure mode; if you run in secure mode, you should manually configure the firewall rules instead).
CentOS:
# service iptables save
# service iptables stop
# chkconfig iptables off
Ubuntu 12.04:
# sudo ufw disable
Fedora 20:
# sudo systemctl status firewalld.service
# sudo systemctl stop firewalld.service
# sudo systemctl disable firewalld.service
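Before (or after) disabling the firewall, it is worth confirming that the DataNode data-transfer port really is unreachable from the client, since NoRouteToHostException almost always means a firewall or routing problem rather than an HDFS bug. A minimal sketch, assuming bash, the Hadoop 2.x default DataNode port 50010, and placeholder node names f1.zhj / f2.zhj:

```shell
# check_port HOST PORT: succeeds if a TCP connection to HOST:PORT can be
# opened within 3 seconds (uses bash's /dev/tcp pseudo-device).
check_port() {
  timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

# Placeholder hostnames -- substitute your own DataNodes.
for node in f1.zhj f2.zhj; do
  if check_port "$node" 50010; then
    echo "$node: 50010 open"
  else
    echo "$node: 50010 blocked or unreachable"
  fi
done
```

If a node reports "blocked or unreachable" while the DataNode process is running there, the firewall (or a bad /etc/hosts entry, see item 3) is the likely culprit.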
2. "Could only be replicated to ..." errors.
See: http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo
3. When using Sqoop to import data from MySQL to HDFS:
sqoop> start job -j 3
the job fails, due to the following errors:
2014-03-20 02:31:14,695 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
Cannot assign container Container: [ContainerId:
container_1395399417464_0002_01_000012, NodeId: f2.zhj:40543,
NodeHttpAddress: f2.zhj:8042, Resource: ,
Priority: 20, Token: Token { kind: ContainerToken,
service: 192.168.122.3:40543 }, ] for a map as either container memory
less than required 1024 or no pending map tasks - maps.isEmpty=true
2014-03-20 02:33:49,930 WARN [CommitterEvent Processor #2]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter:
Could not delete hdfs://192.168.122.1:2014/test/actor/_temporary/1
/_temporary/attempt_1395399417464_0002_m_000002
All of the above errors were resolved by fixing /etc/hosts on all nodes: comment out every line starting with 127.0.0.1 or 127.0.1.1.
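For reference, the corrected /etc/hosts would look roughly like this on each node (the 192.168.122.x addresses and f2.zhj come from the log above; master.zhj is a hypothetical name for the NameNode host):

```
# 127.0.0.1  localhost    <- commented out: a Hadoop daemon that registers
# 127.0.1.1  f2.zhj          with a loopback address is unreachable from
#                             every other node in the cluster
192.168.122.1  master.zhj   # placeholder NameNode hostname
192.168.122.3  f2.zhj
```

After editing /etc/hosts, restart the Hadoop daemons so they re-register with their routable addresses.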