Tuesday, May 1, 2018

Python API to create Hue users/groups using Kerberos authentication

In this post I am going to show how to automate creating, modifying, and deleting users and groups in Cloudera Hue using a Python script.

Hue already simplifies creating users and groups through its Sync LDAP Users and Sync LDAP Groups features. But at our client site we still need to create Hue users and groups manually, since HDFS group mapping is configured with ShellBasedUnixGroupsMapping and all the AD groups are created in Windows format.
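
Below is a minimal sketch of the approach, assuming Hue runs at https://hue.tanu.com:8888 with SPNEGO (Kerberos) authentication enabled and that the requests and requests-kerberos packages are installed. The /useradmin/users/new endpoint and its form fields come from Hue's Django web UI and may differ between Hue versions, so treat them as assumptions to verify against your installation.

#!/usr/bin/env python
# Minimal sketch: create a Hue user over HTTPS with Kerberos (SPNEGO) auth.
# Assumptions: Hue at HUE_URL with SPNEGO enabled, and the requests and
# requests-kerberos packages installed (pip install requests requests-kerberos).
# The /useradmin/users/new endpoint and form fields come from Hue's Django UI
# and may differ between Hue versions -- verify against your installation.
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

HUE_URL = 'https://hue.tanu.com:8888'           # hypothetical Hue server

session = requests.Session()
session.auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)
session.verify = '/opt/pki/etc/tca/test123.pem' # CA bundle for TLS

# GET the form first so Django sets the csrftoken cookie
resp = session.get(HUE_URL + '/useradmin/users/new')
resp.raise_for_status()

payload = {
    'csrfmiddlewaretoken': session.cookies.get('csrftoken'),
    'username': 'newuser',
    'password1': 'changeme',
    'password2': 'changeme',
    'ensure_home_directory': 'on',
}
# Django checks the Referer header on HTTPS POSTs
resp = session.post(HUE_URL + '/useradmin/users/new', data=payload,
                    headers={'Referer': HUE_URL + '/useradmin/users/new'})
print(resp.status_code)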

Friday, April 6, 2018

Mapping Kerberos Principals to Short Names

If your Unix login name is different from your Kerberos principal in Active Directory, you might end up with unexpected behavior.

For example, say you have a Unix account named devadm, and the Kerberos principal configured for this account in Active Directory is sv_ou_devadm_test@TANU.COM (many organizations follow their own naming standard).

If you run any MapReduce jobs, Hadoop will always use the Kerberos principal and create all files and directories in HDFS with sv_ou_devadm_test ownership.

It will also try to set the YARN container log ownership to the principal name and then fail with the error below.

2018-04-05 10:34:01,945 ERROR org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat: Error aggregating log file. Log file : /var/log/hadoop-yarn/container/application_1522780258140_0003/container_1522780258140_0003_01_000001/syslog. Owner 'devadm' for path /var/log/hadoop-yarn/container/application_1522780258140_0003/container_1522780258140_0003_01_000001/syslog did not match expected owner 'sv_ou_devadm_test'
java.io.IOException: Owner 'devadm' for path /var/log/hadoop-yarn/container/application_1522780258140_0003/container_1522780258140_0003_01_000001/syslog did not match expected owner 'sv_ou_devadm_test'
        at org.apache.hadoop.io.SecureIOUtils.checkStat(SecureIOUtils.java:284)
        at java.lang.Thread.run(Thread.java:745)


Add the rules below to hadoop.security.auth_to_local; they strip the sv_ou_ prefix and _test suffix from the principal:

RULE:[1:$1](sv_ou_.*_test)s/sv_ou_(.*)_test/$1/g
RULE:[2:$1](sv_ou_.*_test)s/sv_ou_(.*)_test/$1/g
DEFAULT

For Reference: 
https://www.cloudera.com/documentation/enterprise/5-8-x/topics/cdh_sg_kerbprin_to_sn.html 




Example Rules

Suppose all of your service principals are either of the form App.service-name/fully.qualified.domain.name@YOUR-REALM.COM or App.service-name@YOUR-REALM.COM, and you want to map these to the short name string service-name. To do this, your rule set would be:
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
  RULE:[1:$1](App\..*)s/App\.(.*)/$1/g
  RULE:[2:$1](App\..*)s/App\.(.*)/$1/g
  DEFAULT
  </value>
</property>
The first $1 in each rule is a reference to the first component of the full principal name, and the second $1 is a regular expression back-reference to text that is matched by (.*).
In the following example, suppose your company's naming scheme for user accounts in Active Directory is FirstnameLastname (for example, JohnDoe), but user home directories in HDFS are /user/firstnamelastname. The following rule set converts user accounts in the CORP.EXAMPLE.COM domain to lowercase.
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>RULE:[1:$1@$0](.*@\QCORP.EXAMPLE.COM\E$)s/@\QCORP.EXAMPLE.COM\E$///L
RULE:[2:$1@$0](.*@\QCORP.EXAMPLE.COM\E$)s/@\QCORP.EXAMPLE.COM\E$///L
DEFAULT</value>
</property>
In this example, the JohnDoe@CORP.EXAMPLE.COM principal becomes the johndoe HDFS user.

Default Rule

You can specify an optional default rule called DEFAULT (see example above). The default rule reduces a principal name down to its first component only. For example, the default rule reduces the principal names atm@YOUR-REALM.COM or atm/fully.qualified.domain.name@YOUR-REALM.COM down to atm, assuming that the default domain is YOUR-REALM.COM.
The default rule applies only if the principal is in the default realm.
If a principal name does not match any of the specified rules, the mapping for that principal name will fail.

Testing Mapping Rules

You can test mapping rules for a long principal name by running:
$ hadoop org.apache.hadoop.security.HadoopKerberosName name1 name2 name3
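
For example, checking the rule added above (hypothetical output; verify on your own cluster):

$ hadoop org.apache.hadoop.security.HadoopKerberosName sv_ou_devadm_test@TANU.COM
Name: sv_ou_devadm_test@TANU.COM to devadm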

Friday, March 16, 2018

Python script to upload files into an AWS S3 bucket


Run this script with the arguments below:

./aws-s3upload.py    S3_BUCKET_NAME SOURCE_FILE  S3_TARGET_DIR

Example: ./aws-s3upload.py test_s3_bucket /app/file/tanu.jpg Images/data

The above command will upload the file to test_s3_bucket/Images/data/tanu.jpg.



#!/usr/bin/env python
import boto.s3    # boto.s3 provides connect_to_region (the script never uses boto.ec2)
import sys
import os
import ntpath

#### Configuration section ####
IAM_ID = 'PLACE IAM ID HERE'
IAM_SECRET = 'PLACE IAM SECRET HERE'
REGION = 'us-east-1'


conn = boto.s3.connect_to_region(REGION, aws_access_key_id=IAM_ID, aws_secret_access_key=IAM_SECRET)

if len(sys.argv) != 4:
    print "USAGE ./aws-s3upload.py S3_BUCKET_NAME FileName TARGETDIR"
    sys.exit(1)

s3_bucket = sys.argv[1]
filename = sys.argv[2]
targetpath = sys.argv[3]

# Progress callback: print a dot as each chunk uploads
def percent_cb(complete, total):
    sys.stdout.write('.')
    sys.stdout.flush()


try:
    print 'Uploading %s to Amazon S3 bucket %s' % (filename, s3_bucket)
    bucket = conn.get_bucket(s3_bucket)
    # ntpath.basename strips the directory part from both Windows and Unix paths
    basename = ntpath.basename(filename)
    full_key_name = os.path.join(targetpath, basename)
    print("Target Upload Location " + full_key_name)
    k = bucket.new_key(full_key_name)
    k.set_contents_from_filename(filename, cb=percent_cb, num_cb=10)
except Exception as e:
    print str(e)
    print "error"

Thursday, January 25, 2018

Cloudera Solr MapReduce indexer failed to index

After we configured TLS for the entire cluster, we were unable to run our existing Solr indexing using search-mr-job.jar and a Morphlines file; we started getting the SSL exception below.


hadoop --config /etc/hadoop/conf jar /cs/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/jars/search-mr-1.0.0-cdh5.12.1-job.jar org.apache.solr.hadoop.MapReduceIndexerTool -D 'mapred.child.java.opts=-Xmx500m' --log4j log4j.properties --morphline-file morph.conf --output-dir hdfs://test.tanu.com:8020/user/solr/cloud-search/atc_co/outdir --verbose --go-live --zk-host test.tanu.com:2181/solr --collection test_collection hdfs://test.tanu.com:8020/user/hive/warehouse/test.db/analyst_ticker_coverage


org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: https://test.tanu.com:8985/solr
        at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:636)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:229)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:225)
        at org.apache.solr.client.solrj.request.CoreAdminRequest.process(CoreAdminRequest.java:567)
        at org.apache.solr.hadoop.GoLive$1.call(GoLive.java:111)
        at org.apache.solr.hadoop.GoLive$1.call(GoLive.java:94)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
        at sun.security.ssl.SSLSessionImpl.getPeerCertificates(SSLSessionImpl.java:397)
        at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:126)
        at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:437)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:643)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:479)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:516)

Resolution:

The go-live phase merges the generated indexes into the live Solr servers over HTTPS from the client JVM, so that JVM must trust the CA that signed the Solr certificates. Point it at your truststore:

export HADOOP_OPTS="$HADOOP_OPTS -Djavax.net.ssl.trustStore=truststore.jks -Djavax.net.ssl.trustStorePassword=password"
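
Before rerunning the indexer, you can confirm the truststore actually contains the CA chain for the Solr hosts with the standard JDK keytool (adjust the path and password to your environment):

keytool -list -keystore truststore.jks -storepass password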

Wednesday, January 24, 2018

Configure Cloudera services SSL using Python scripting

Configuring SSL for Cloudera Manager and its services manually is a tedious process, since there are many services (Cloudera Manager, Hue, Hive, Impala, Solr, Oozie, HDFS) and many properties (private key, public key, key password, truststore key) to update.

To make it simple, we can do the whole setup quickly with the Python API.

Steps:

1) First install and configure Anaconda Python.
2) Install the cm-api Python module offered by Cloudera:
     pip install cm-api

Then create the script below and run it:


#!/usr/bin/env python

import socket
from cm_api.api_client import ApiResource
from cm_api.api_client import ApiException
from cm_api.endpoints.cms import ClouderaManager
from cm_api.endpoints.services import ApiService
import ssl
import json
import sys



CM_HOST = "cm.tanu.com"
cxt = ssl.create_default_context(cafile="ca_trust_store.pem")

#api = ApiResource(CM_HOST,version=12, username="admin", password="admin",use_tls=True,ssl_context=cxt) ### If CM is already configured with SSL
api = ApiResource(CM_HOST,version=12, username="admin", password="admin") # For non-SSL CM
clu = api.get_cluster("cluster")  # the cluster is named "cluster" in this CM instance

############ Update the keystore and trustore and pem file location for each service ###########

hdfs_ssl_enable = { 'ssl_client_truststore_location':'/opt/pki/etc/tca/test123.jks','ssl_client_truststore_password':'test123','hdfs_hadoop_ssl_enabled' : 'true','ssl_server_keystore_location' : '/app/opt/cloudera/certs/jks/javakeystore.jks','ssl_server_keystore_password':'test123','ssl_server_keystore_keypassword':'test123' }
hdfs_httpfs_ssl_enable = { 'httpfs_https_truststore_file':'/opt/pki/etc/tca/test123.jks','httpfs_https_truststore_password':'test123','httpfs_use_ssl' : 'true','httpfs_https_keystore_file' : '/app/opt/cloudera/certs/jks/javakeystore.jks','httpfs_https_keystore_password':'test123' }
yarn_ssl_enable = { 'ssl_server_keystore_location' : '/app/opt/cloudera/certs/jks/javakeystore.jks','ssl_server_keystore_password':'test123','ssl_server_keystore_keypassword':'test123' }
cm_ssl_conf = {'WEB_TLS':'true','KEYSTORE_PATH':'/opt/cloudera-manager/ssl/jks/javakeystore.jks','KEYSTORE_PASSWORD':'test123','TRUSTSTORE_PATH':'/opt/pki/etc/tca/test123.jks','TRUSTSTORE_PASSWORD':'test123'}
cm_managed_service = {'ssl_client_truststore_location':'/opt/pki/etc/tca/test123.jks','ssl_client_truststore_password':'test123'}
hbase_ssl_enable = { 'hbase_hadoop_ssl_enabled' : 'true','ssl_server_keystore_location' : '/app/opt/cloudera/certs/jks/javakeystore.jks','ssl_server_keystore_password':'test123','ssl_server_keystore_keypassword':'test123' }
oozie_role_ssl_enable = { 'oozie_https_truststore_file':'/opt/pki/etc/tca/test123.jks','oozie_https_truststore_password':'test123','oozie_https_keystore_file' : '/app/opt/cloudera/certs/jks/javakeystore.jks','oozie_https_keystore_password':'test123' }
oozie_ssl_enable = { 'oozie_use_ssl' : 'true'}
hive_ssl_enable={'hiveserver2_keystore_path':'/app/opt/cloudera/certs/jks/javakeystore.jks','hiveserver2_keystore_password':'test123','hiveserver2_truststore_file':'/opt/pki/etc/tca/test123.jks','hiveserver2_truststore_password':'test123','hiveserver2_enable_ssl':'true'}
solr_ssl_enable={'solr_https_keystore_file':'/app/opt/cloudera/certs/jks/javakeystore.jks','solr_https_keystore_password':'test123','solr_https_truststore_file':'/opt/pki/etc/tca/test123.jks','solr_https_truststore_password':'test123','solr_use_ssl':'true'}

impala_ssl_enable={"client_services_ssl_enabled": "true", "ssl_server_certificate": "/app/opt/cloudera/certs/pem/cert.pem","ssl_private_key_password": "test123", "ssl_client_ca_certificate": "/opt/pki/etc/tca/test123.pem", "ssl_private_key": "/app/opt/cloudera/certs/pem/key.pem"}
impala_BASE_ssl_enable={"webserver_private_key_file": "/app/opt/cloudera/certs/pem/key.pem", "webserver_certificate_file": "/app/opt/cloudera/certs/pem/cert.pem", "webserver_private_key_password_cmd": "test123"}
impala_STATESTORE_ssl_enable={"webserver_private_key_file": "/app/opt/cloudera/certs/pem/key.pem", "webserver_certificate_file": "/app/opt/cloudera/certs/pem/cert.pem","webserver_private_key_password_cmd": "test123"}
impala_CATALOGSERVER_ssl_enable={"webserver_private_key_file": "/app/opt/cloudera/certs/pem/key.pem", "webserver_certificate_file": "/app/opt/cloudera/certs/pem/cert.pem","webserver_private_key_password_cmd": "test123"}

hue_SERVER_role_enable_ssl={"ssl_certificate": "/app/opt/cloudera/certs/pem/cert.pem", "ssl_private_key_password": "test123", "ssl_cacerts": "/opt/pki/etc/tca/test123.pem", "ssl_enable": "true", "ssl_private_key": "/app/opt/cloudera/certs/pem/key.pem"}


### UPDATE CLOUDERA MANAGER ##
cm=ClouderaManager(api)
cm.update_config(cm_ssl_conf)
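### Note: the WEB_TLS change takes effect only after the Cloudera Manager
### server itself is restarted (e.g. service cloudera-scm-server restart).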

#### UPDATE HDFS SSL CONFIG ###
hdfs=clu.get_service('hdfs')
hdfs.update_config(hdfs_ssl_enable)

###  UPDATE HTTPFS SSL CONFIG ###
httpfs_role_group=hdfs.get_role_config_group("hdfs-HTTPFS-BASE")
httpfs_role_group.update_config(hdfs_httpfs_ssl_enable)

###  UPDATE YARN SSL CONFIG ###
yarn=clu.get_service('yarn')
yarn.update_config(yarn_ssl_enable)

#### UPDATE HBASE SSL CONFIG ###
print("Updating Hbase SSL Config")
hbase=clu.get_service('hbase')
hbase.update_config(hbase_ssl_enable)
print(hbase.get_config())

#### UPDATE OOZIE SSL CONFIG ###
oozie=clu.get_service('oozie')
oozie.update_config(oozie_ssl_enable)
oozie_role_group=oozie.get_role_config_group("oozie-OOZIE_SERVER-BASE")
oozie_role_group.update_config(oozie_role_ssl_enable)


#### UPDATE HIVE SSL CONFIG ###
hive=clu.get_service('hive')
hive.update_config(hive_ssl_enable)

#### UPDATE solr SSL CONFIG ###
solr=clu.get_service('solr')
solr.update_config(solr_ssl_enable)

### IMPALA SSL UPDATE ###
impala=clu.get_service('impala')
impala.update_config(impala_ssl_enable)
impala_IMPALAD_role_group=impala.get_role_config_group("impala-IMPALAD-BASE")
impala_STATESTORE_role_group=impala.get_role_config_group("impala-STATESTORE-BASE")
impala_CATALOGSERVER_role_group=impala.get_role_config_group("impala-CATALOGSERVER-BASE")
impala_IMPALAD_role_group.update_config(impala_BASE_ssl_enable)
impala_STATESTORE_role_group.update_config(impala_STATESTORE_ssl_enable)
impala_CATALOGSERVER_role_group.update_config(impala_CATALOGSERVER_ssl_enable)


### UPDATE HUE SSL ###
hue=clu.get_service('hue')
hue_SERVER_role_group=hue.get_role_config_group("hue-HUE_SERVER-BASE")
hue_SERVER_role_group.update_config(hue_SERVER_role_enable_ssl)
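
### OPTIONAL (hedged sketch): apply the changes cluster-wide. In cm_api,
### ApiCluster exposes deploy_client_config() and restart(), each returning an
### ApiCommand that can be wait()ed on -- verify against your cm-api version.
clu.deploy_client_config().wait()
clu.restart().wait()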


#############################
You can also use this script to update other configurations. Use the lines below to dump the existing configuration, substitute your own values, and run it.

To dump the existing configuration:

For Impala : 

impala=clu.get_service("impala")
y=impala.get_config(view="summary")
json.dump(y, sys.stdout)
for role in impala.get_all_role_config_groups():
        print(role)
        print("--------------------------------")
        x=role.get_config(view="summary")
        json.dump(x, sys.stdout)

Output

[{"client_services_ssl_enabled": "true", "ssl_server_certificate": "/app/opt/cloudera/certs/pem/cert.pem", "admission_control_enabled": "true", "hbase_service": "hbase", "hive_service": "hive", "hdfs_service": "hdfs", "ssl_private_key_password": "test123", "ssl_client_ca_certificate": "/opt/pki/etc/tca/test123.pem", "ssl_private_key": "/app/opt/cloudera/certs/pem/key.pem", "all_admission_control_enabled": "true"}, {}]<ApiRoleConfigGroup>: impala-IMPALAD-BASE (cluster: cluster; service: impala)
--------------------------------
{"webserver_private_key_file": "/app/opt/cloudera/certs/pem/key.pem", "impalad_memory_limit": "17179869184", "enable_audit_event_log": "true", "scratch_dirs": "/app/hadoop/impala/impalad", "webserver_certificate_file": "/app/opt/cloudera/certs/pem/cert.pem", "log_dir": "/app/var/log/impalad", "lineage_event_log_dir": "/app/var/log/impalad/lineage", "webserver_private_key_password_cmd": "test123"}<ApiRoleConfigGroup>: impala-STATESTORE-BASE (cluster: cluster; service: impala)
--------------------------------
{"webserver_certificate_file": "/app/opt/cloudera/certs/pem/cert.pem", "log_threshold": "DEBUG", "webserver_private_key_password_cmd": "test123", "log_dir": "/app/var/log/statestore", "webserver_private_key_file": "/app/opt/cloudera/certs/pem/key.pem"}<ApiRoleConfigGroup>: impala-CATALOGSERVER-BASE (cluster: cluster; service: impala)
--------------------------------
{"catalogd_embedded_jvm_heapsize": "34359738368", "webserver_private_key_file": "/app/opt/cloudera/certs/pem/key.pem", "webserver_certificate_file": "/app/opt/cloudera/certs/pem/cert.pem", "load_catalog_in_background": "true", "log_dir": "/app/var/log/catalogd", "oom_heap_dump_enabled": "false", "webserver_private_key_password_cmd": "test123"}<ApiRoleConfigGroup>: impala-LLAMA-BASE (cluster: cluster; service: impala)


For HDFS :

hdfs=clu.get_service("hdfs")
y=hdfs.get_config(view="summary")
json.dump(y, sys.stdout)
for role in hdfs.get_all_role_config_groups():
        print(role)
        print("--------------------------------")
        x=role.get_config(view="summary")
        json.dump(x, sys.stdout)


Output

[{"hdfs_hadoop_ssl_enabled": "true", "core_site_safety_valve": "<property>        <name>hadoop.user.group.static.mapping.overrides</name>        <value>dr.who=;mapred=mapred,hadoop;impala=impala,hive,yarn,hadoop;</value>\r\n</property>\r\n", "ssl_server_keystore_password": "test123", "kms_service": "kms", "dfs_namenode_acls_enabled": "true", "ssl_server_keystore_keypassword": "test123", "dfs_block_local_path_access_user": "impala", "ssl_server_keystore_location": "/app/opt/cloudera/certs/jks/javakeystore.jks", "ssl_client_truststore_password": "test123", "audit_event_log_dir": "/app/var/log/hadoop-hdfs/audit", "dfs_replication": "1", "ssl_client_truststore_location": "/opt/pki/etc/tca/test123.jks"}, {}]<ApiRoleConfigGroup>: hdfs-DATANODE-BASE (cluster: cluster; service: hdfs)
--------------------------------
{"dfs_datanode_max_xcievers": "8192", "oom_heap_dump_enabled": "false", "dfs_datanode_data_dir_perm": "755", "datanode_log_dir": "/app/var/log/hadoop-hdfs", "dfs_datanode_volume_choosing_policy": "org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy", "dfs_data_dir_list": "/app/hadoop/data/d01/dfs/dn,/app/hadoop/data/d02/dfs/dn,/app/hadoop/data/d03/dfs/dn,/app/hadoop/data/d04/dfs/dn,/app/hadoop/data/d05/dfs/dn,/app/hadoop/data/d06/dfs/dn,/app/hadoop/data/d07/dfs/dn,/app/hadoop/data/d08/dfs/dn,/app/hadoop/data/d09/dfs/dn,/app/hadoop/data/d10/dfs/dn,/app/hadoop/data/d11/dfs/dn,/app/hadoop/data/d12/dfs/dn,/app/hadoop/data/d13/dfs/dn,/app/hadoop/data/d14/dfs/dn,/app/hadoop/data/d15/dfs/dn,/app/hadoop/data/d16/dfs/dn,/app/hadoop/data/d17/dfs/dn,/app/hadoop/data/d18/dfs/dn,/app/hadoop/data/d19/dfs/dn,/app/hadoop/data/d20/dfs/dn", "dfs_datanode_failed_volumes_tolerated": "10"}<ApiRoleConfigGroup>: hdfs-NAMENODE-BASE (cluster: cluster; service: hdfs)
--------------------------------
{"dfs_name_dir_list": "/app/nn", "dfs_namenode_servicerpc_address": "8022", "namenode_log_dir": "/app/var/log/hadoop-hdfs", "fs_trash_interval": "60", "oom_heap_dump_enabled": "false", "dfs_safemode_min_datanodes": "0"}<ApiRoleConfigGroup>: hdfs-FAILOVERCONTROLLER-BASE (cluster: cluster; service: hdfs)
--------------------------------
{"oom_heap_dump_enabled": "false"}<ApiRoleConfigGroup>: hdfs-BALANCER-BASE (cluster: cluster; service: hdfs)
--------------------------------
{}<ApiRoleConfigGroup>: hdfs-GATEWAY-BASE (cluster: cluster; service: hdfs)
--------------------------------
{"hdfs_client_config_safety_valve": "<property>\r\n<name>dfs.client.block.write.replace-datanode-on-failure.enable</name>\r\n<value>NEVER</value>\r\n</property>", "dfs_client_use_trash": "true"}<ApiRoleConfigGroup>: hdfs-SECONDARYNAMENODE-BASE (cluster: cluster; service: hdfs)
--------------------------------
{"secondarynamenode_log_dir": "/app/var/log/hadoop-hdfs", "oom_heap_dump_enabled": "false", "fs_checkpoint_dir_list": "/app/nn/dfs/snn"}<ApiRoleConfigGroup>: hdfs-JOURNALNODE-BASE (cluster: cluster; service: hdfs)
--------------------------------
{"oom_heap_dump_enabled": "false"}<ApiRoleConfigGroup>: hdfs-HTTPFS-BASE (cluster: cluster; service: hdfs)
--------------------------------
{"httpfs_https_keystore_file": "/app/opt/cloudera/certs/jks/javakeystore.jks", "httpfs_https_keystore_password": "test123", "httpfs_https_truststore_password": "test123", "httpfs_https_truststore_file": "/opt/pki/etc/tca/test123.jks", "oom_heap_dump_enabled": "false", "httpfs_use_ssl": "true"}<ApiRoleConfigGroup>: hdfs-NFSGATEWAY-BASE (cluster: cluster; service: hdfs)


Friday, January 19, 2018

Mounting a Windows share folder in Linux using CIFS


In this post I am going to show you how to mount a Windows share folder \\windows_server\sharefolder on a Linux server using the CIFS protocol.

I assume that you have already created the share folder \\windows_server\sharefolder and granted access to user1.

Create a credentials file for user1 so the password is not exposed to everyone through the CIFS mount options, and set its permissions to root-only with mode 700.
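
A hedged example of the credentials file; the username=, password=, and domain= keys are the standard mount.cifs credentials format (substitute your own values):

# /root/passwords/cif_passwd
username=user1
password=user1_password
domain=WINDOWS_DOMAIN

chmod 700 /root/passwords/cif_passwd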


Install the cifs-utils package:

sudo yum install cifs-utils

Create the mount point (mkdir /win_share_folder), then edit /etc/fstab and add the line below:

//windows_server/sharefolder /win_share_folder cifs credentials=/root/passwords/cif_passwd,uid=user1_uid 0 0

mount -a

Now you should be able to see the Windows share folder under /win_share_folder.

While mounting, I passed uid=user1_uid. If you don't specify that option, only the root user can create, copy, or delete files in /win_share_folder.