Error: Multilib version problems found. This often means that the root cause is something else and multilib version checking is just pointing out that there is a problem. Eg.:

  1. You have an upgrade for libxml2 which is missing some dependency that another package requires. Yum is trying to solve this by installing an older version of libxml2 of the different architecture. If you exclude the bad architecture yum will tell you what the root cause is (which package requires what). You can try redoing the upgrade with --exclude libxml2.otherarch ... this should give you an error message showing the root cause of the problem.

  2. You have multiple architectures of libxml2 installed, but yum can only see an upgrade for one of those architectures. If you don't want/need both architectures anymore then you can remove the one with the missing update and everything will work.

  3. You have duplicate versions of libxml2 installed already. You can use "yum check" to get yum show these errors.

  ...you can also use --setopt=protected_multilib=false to remove this checking, however this is almost never the correct thing to do as something else is very likely to go wrong (often causing much more problems).

Protected multilib versions: libxml2-2.9.1-6.el7.5.i686 != libxml2-2.9.1-6.el7_9.6.x86_64
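Following the hints in the yum message, a sketch of the usual diagnosis, assuming (as the last line of the error suggests) that the stray i686 copy of libxml2 is the one that is not wanted:

```shell
# Re-run the transaction excluding the bad architecture; yum should
# then print the real root cause (which package requires what):
yum install libxslt-devel --exclude=libxml2.i686

# List every installed copy with its architecture:
rpm -q libxml2 --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n'

# If the i686 copy is genuinely unneeded, removing it usually clears
# the multilib conflict:
yum remove libxml2.i686

# Look for duplicate installs (point 3 in the message):
yum check
```

Be careful to remove the i686 package and not the x86_64 one: libxml2.x86_64 is a dependency of yum's own Python bindings, and deleting it leaves yum unable to start at all, which is exactly the failure shown next.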
$ yum install libxslt-devel
There was a problem importing one of the Python modules required to run yum. The error leading to this problem was:

   libxml2.so.2: cannot open shared object file: No such file or directory

Please install a package which provides this module, or verify that the module is installed correctly.

It's possible that the above module doesn't match the current version of Python, which is:
2.7.5 (default, Oct 14 2020, 14:45:30)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]

If you cannot solve this problem yourself, please go to the yum faq at:
  http://yum.baseurl.org/wiki/Faq
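At this point yum cannot repair anything, because its own Python bindings fail to load libxml2.so.2, so the fix has to go through rpm directly. A recovery sketch; the package filename below is illustrative, match the exact version for your release (download it on a working machine and copy it over if needed):

```shell
# Check whether a cached copy of the package survived locally:
find /var/cache/yum -name 'libxml2-*.x86_64.rpm' 2>/dev/null

# Reinstall it with rpm, bypassing yum (filename is an assumption):
rpm -Uvh --replacepkgs --replacefiles libxml2-2.9.1-6.el7_9.6.x86_64.rpm

# Verify the shared object is visible again, then retry yum:
ldconfig -p | grep libxml2.so.2
```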
Last login: Thu Jul 28 16:19:10 CST 2022
-bash-4.2$ hdfs dfs -ls /
2022-07-28 16:21:11 INFO org.apache.hadoop.io.retry.RetryInvocationHandler: java.net.ConnectException: Call From pdh01/10.20.210.41 to ds04:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused, while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over ds04/10.20.210.48:8020 after 1 failover attempts. Trying to failover after sleeping for 774ms.
2022-07-28 16:21:12 INFO org.apache.hadoop.io.retry.RetryInvocationHandler: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1952)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1423)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3085)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1154)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
, while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over ds03/10.20.210.47:8020 after 2 failover attempts. Trying to failover after sleeping for 2547ms.
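Both retries above fail in different ways: ds04 refuses the connection outright, while ds03 answers but is in standby, so no NameNode is currently active. A first-pass check; the service IDs nn1/nn2 below are placeholders, use the names defined by dfs.ha.namenodes.<nameservice> in hdfs-site.xml:

```shell
# Which NameNode, if any, is active?
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# If both report "standby", automatic failover is stuck: verify that
# the ZKFC daemons and the ZooKeeper quorum are healthy first. Only
# then consider a manual transition (risky if the peer could still
# become active):
hdfs haadmin -transitionToActive --forcemanual nn1
```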
2022-07-27 08:29:58 [AsyncTask] Task Failed: [HDFS] 为 pdh01 HDFS 添加 /tmp,/user 目录
TaskInfo: [ hostname: pdh01, ipv4: 10.20.210.41, ipv6: null, name: ChmodHDFS777Task, desc: [HDFS] 为 pdh01 HDFS 添加 /tmp,/user 目录, exec: chmod-hdfs.sh, timeout: null, args: [/user, /tmp, /proya-base/, /proya-base/tmp, /proya-base/user, 2.0.0.0], interactArgs: null, state: FAILED, skippable: false ] null
2022-07-27 08:29:58 [AsyncTask] org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://pdh01:8001/api/v1/udp/agent/exec": Read timed out; nested exception is java.net.SocketTimeoutException: Read timed out
        at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:751)
        at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:677)
        at org.springframework.web.client.RestTemplate.postForObject(RestTemplate.java:421)
        at cn.ucloud.udp.async.task.impl.service.hdfs.ChmodHDFS777Task.execute(ChmodHDFS777Task.java:42)
        at cn.ucloud.udp.async.task.AbstractTask.run(AbstractTask.java:206)
        at cn.ucloud.udp.async.task.AbstractTask.call(AbstractTask.java:192)
        at cn.ucloud.udp.async.task.AbstractTask.call(AbstractTask.java:68)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:171)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
        at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
        at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
        at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
        at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
        at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
        at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
        at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
        at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
        at org.springframework.http.client.HttpComponentsStreamingClientHttpRequest.executeInternal(HttpCompone
16/06/27 13:41:11 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=node1:2181,node2:2181,node3:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@125a6d70
16/06/27 13:41:11 INFO zookeeper.ClientCnxn: Opening socket connection to server node1/192.168.245.11:2181. Will not attempt to authenticate using SASL (unknown error)
16/06/27 13:41:11 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
16/06/27 13:41:11 INFO zookeeper.ClientCnxn: Opening socket connection to server node2/192.168.245.12:2181. Will not attempt to authenticate using SASL (unknown error)
16/06/27 13:41:11 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
16/06/27 13:41:11 INFO zookeeper.ClientCnxn: Opening socket connection to server node3/192.168.245.13:2181. Will not attempt to authenticate using SASL (unknown error)
16/06/27 13:41:11 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
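Connection refused from every quorum member points at the ZooKeeper servers themselves being down (or port 2181 blocked), not at the client. A quick sweep, using the hostnames from the connect string above; the `ruok` four-letter command assumes it is enabled on this ZooKeeper version:

```shell
# Probe each quorum member on the client port:
for h in node1 node2 node3; do
  printf '%s: ' "$h"
  echo ruok | nc -w 2 "$h" 2181 || echo "no answer"
done

# On a node that does not answer, start the server and check its role
# (leader/follower) once the quorum forms:
zkServer.sh start
zkServer.sh status
```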
[root@ds02 zookeeper]# su - hadoop
Last login: Thu Jul 28 14:28:34 CST 2022
-bash-4.2$ hdfs zkfc -formatZK
2022-07-28 14:29:08 INFO org.apache.hadoop.hdfs.tools.DFSZKFailoverController: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DFSZKFailoverController
STARTUP_MSG:   host = ds02/10.20.210.46
STARTUP_MSG:   args = [-formatZK]
STARTUP_MSG:   version = 3.1.1
STARTUP_MSG:   classpath = /opt/usdp-srv/srv/udp/2.0.0.0/hdfs/etc/hadoop:/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/share/hadoop/common/lib/accessors-smart-1.2.jar:/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/share/ha ...
STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'hadoop' on 2020-11-15T04:36Z
STARTUP_MSG:   java = 1.8.0_202
************************************************************/
2022-07-28 14:29:09 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session connected.
===============================================
The configured parent znode /hadoop-ha/proya-base already exists.
Are you sure you want to clear all failover information from ZooKeeper?
WARNING: Before proceeding, ensure that all HDFS services and failover controllers are stopped!
===============================================
Proceed formatting /hadoop-ha/proya-base? (Y or N) y
2022-07-28 14:29:12 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down for session: 0x30004a09f880001
2022-07-28 14:29:12 INFO org.apache.hadoop.hdfs.tools.DFSZKFailoverController: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DFSZKFailoverController at ds02/10.20.210.46
************************************************************/
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000ce000000, 838860800, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 838860800 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /opt/usdp-srv/usdp/bin/hs_err_pid459521.log
Well then, we are out of memory:
[root@ds02 zookeeper]# free -m
              total        used        free      shared  buff/cache   available
Mem:           7821        5710        1872           8         238        1872
Swap:             0           0           0
[root@ds02 zookeeper]#
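free(1) still shows roughly 1.8 GiB available, yet the JVM failed to commit 800 MiB, so memory was presumably exhausted at the moment of the crash; with Swap at 0 the box has no headroom at all. Two common mitigations, both sketches whose values (heap size, swap size) are assumptions to adapt:

```shell
# 1. Cap the Hadoop daemon heap. Hadoop 3.x reads HADOOP_HEAPSIZE_MAX,
#    normally set persistently in hadoop-env.sh:
export HADOOP_HEAPSIZE_MAX=512m

# 2. Add a swap file as a stopgap on a box with no swap configured:
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
free -m    # confirm the Swap row is now non-zero
```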
Traceback (most recent call last):
  File "/usr/local/bin/pssh", line 118, in <module>
    do_pssh(hosts, cmdline, opts)
  File "/usr/local/bin/pssh", line 71, in do_pssh
    manager = Manager(opts)
  File "/usr/local/lib/python3.6/site-packages/psshlib/manager.py", line 42, in __init__
    self.iomap = IOMap()
  File "/usr/local/lib/python3.6/site-packages/psshlib/manager.py", line 215, in __init__
    signal.set_wakeup_fd(wakeup_writefd)
ValueError: the fd 4 must be in non-blocking mode
[WARN] Could not import version package in /usr/local/lib/python3.6/site-packages/psshlib/cli.py
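This is a known Python 2 to 3 incompatibility: psshlib predates Python 3's requirement that the fd passed to signal.set_wakeup_fd() be non-blocking. A minimal local patch (a sketch: the path comes from the traceback above, the indentation is assumed to match manager.py, and it relies on `os` already being imported there, which it is since the module creates the pipe with os.pipe()):

```shell
f=/usr/local/lib/python3.6/site-packages/psshlib/manager.py
if [ -f "$f" ]; then
  cp "$f" "$f.bak"   # keep a backup before editing in place
  # Put the fd into non-blocking mode just before the failing call:
  sed -i 's/signal\.set_wakeup_fd(wakeup_writefd)/os.set_blocking(wakeup_writefd, False)\n        signal.set_wakeup_fd(wakeup_writefd)/' "$f"
fi
```

Alternatively, install a fork of pssh that already supports Python 3, if one is available for your environment, rather than patching in place.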