Overview
Request 583860 accepted
- Install /etc/spark/spark-env. This script is automatically
read during startup and can be used for custom configuration
- Install /etc/spark/spark-defaults.conf
- Create /run/spark dir via systemd tmpfiles
- Add missing Requires/BuildRequires for systemd
- Drop openstack-suse-macros BuildRequires and use the typical
way to create a spark user/group and homedir
- Add useful description
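The tmpfiles and user-creation steps above can be sketched roughly as follows; the exact paths, IDs, and scriptlet wording are assumptions, not taken from the actual spec:

```
# Assumed tmpfiles.d entry (e.g. /usr/lib/tmpfiles.d/spark.conf):
# create /run/spark at boot, owned by the spark user/group
d /run/spark 0755 spark spark -

# Assumed %pre scriptlet creating the spark user/group "the typical way":
getent group spark >/dev/null || groupadd -r spark
getent passwd spark >/dev/null || \
    useradd -r -g spark -d /var/lib/spark -s /sbin/nologin \
    -c "Apache Spark daemon" spark
```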
Request History
tbechtold created request
dirkmueller accepted request
+- Install /etc/spark/spark-defaults.conf
+install -D -m 755 %{name}-%{version}/dist/conf/spark-env.sh.template %{buildroot}/%{_sysconfdir}/spark/spark-env
This can be removed. spark-env.sh.template is in a different format: it sets variables using "export var=val" syntax, whereas for systemd purposes the variables in the spark-env EnvironmentFile should use plain "var=val" syntax.
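The format mismatch being pointed out can be illustrated with a minimal sketch (the variable names and values here are examples, not taken from the actual template):

```
# spark-env.sh.template style: a shell script, sourced by Spark's scripts
export SPARK_MASTER_PORT=7077
export SPARK_DAEMON_JAVA_OPTS="-Xms512m -Xmx1g"

# systemd EnvironmentFile style: plain key=value lines, no "export"
SPARK_MASTER_PORT=7077
SPARK_DAEMON_JAVA_OPTS=-Xms512m -Xmx1g
```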
ExecStart=/usr/bin/java \
    -cp "/usr/share/spark/lib/*" \
    org.apache.spark.deploy.master.Master
We also want to allow users to set the Java heap size (the -Xmx and -Xms options). I don't think those get passed automatically when you invoke java; they were being set via the $SPARK_DAEMON_JAVA_OPTS variable. I have a feeling the same is true for $SPARK_MASTERS in the spark worker, since it is a variable that can contain multiple masters (master1:port,master2:port). Not sure about --ip and --port; whether they get picked up from the environment automagically will have to be tested.
But in general I like calling out options explicitly when invoking java.
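One way to pass the heap options explicitly, along the lines suggested above, is to read them from the EnvironmentFile and hand them to java on the command line (systemd performs word splitting on $VAR expansions in ExecStart); the variable name and layout here are a sketch, not the shipped unit file:

```
# Sketch of the [Service] section of a spark-master.service unit
[Service]
EnvironmentFile=-/etc/spark/spark-env
ExecStart=/usr/bin/java \
    $SPARK_DAEMON_JAVA_OPTS \
    -cp "/usr/share/spark/lib/*" \
    org.apache.spark.deploy.master.Master
```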