
How to specify a role at node level within an Akka cluster?

Given the following application.conf:

akka {
  loglevel = debug
  actor {
    provider = cluster

    serialization-bindings {
      "sample.cluster.CborSerializable" = jackson-cbor
    }
  }
  remote {
    artery {
      canonical.hostname = "127.0.0.1"
      canonical.port = 0
    }
  }
  cluster {
roles= ["testrole1" , "testrole2"]
    seed-nodes = [
      "akka://ClusterSystem@127.0.0.1:25251",
      "akka://ClusterSystem@127.0.0.1:25252"]
    downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
  }
}

To distinguish between the roles within an actor I use:

void register(Member member) {
  if (member.hasRole("testrole1")) {
    // start actor a1
  } else if (member.hasRole("testrole2")) {
    // start actor a2
  }
}

(adapted from https://doc.akka.io/docs/akka/current/cluster-usage.html)
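
For context, a minimal classic-actor sketch of where a register method like this typically sits, patterned on the linked sample (the class name RoleAwareListener and the commented actor start-up lines are placeholders):

import akka.actor.AbstractActor;
import akka.cluster.Cluster;
import akka.cluster.ClusterEvent.CurrentClusterState;
import akka.cluster.ClusterEvent.MemberUp;
import akka.cluster.Member;
import akka.cluster.MemberStatus;

public class RoleAwareListener extends AbstractActor {

  private final Cluster cluster = Cluster.get(getContext().getSystem());

  @Override
  public void preStart() {
    // Subscribe to MemberUp events; the current cluster state arrives first as a snapshot.
    cluster.subscribe(getSelf(), MemberUp.class);
  }

  @Override
  public void postStop() {
    cluster.unsubscribe(getSelf());
  }

  @Override
  public Receive createReceive() {
    return receiveBuilder()
        .match(CurrentClusterState.class, state -> {
          for (Member member : state.getMembers()) {
            if (member.status().equals(MemberStatus.up())) {
              register(member);
            }
          }
        })
        .match(MemberUp.class, up -> register(up.member()))
        .build();
  }

  void register(Member member) {
    if (member.hasRole("testrole1")) {
      // start actor a1 (placeholder)
    } else if (member.hasRole("testrole2")) {
      // start actor a2 (placeholder)
    }
  }
}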

To enable roles for a node I configure the akka.cluster.roles array within application.conf, but this appears to apply at the cluster level rather than the node level. In other words, it does not seem possible to configure application.conf such that the Akka cluster is instructed to start actor a1 on node n1 and actor a2 on node n2. Should node details be specified at the level of akka.cluster in application.conf?

Is it required to specify a separate application.conf file for each node?

For example, the application.conf for testrole1:

akka {
  loglevel = debug
  actor {
    provider = cluster

    serialization-bindings {
      "sample.cluster.CborSerializable" = jackson-cbor
    }
  }
  remote {
    artery {
      canonical.hostname = "127.0.0.1"
      canonical.port = 0
    }
  }
  cluster {
roles= ["testrole1"]
    seed-nodes = [
      "akka://ClusterSystem@127.0.0.1:25251",
      "akka://ClusterSystem@127.0.0.1:25252"]
    downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
  }
}

The application.conf for testrole2:

akka {
  loglevel = debug
  actor {
    provider = cluster

    serialization-bindings {
      "sample.cluster.CborSerializable" = jackson-cbor
    }
  }
  remote {
    artery {
      canonical.hostname = "127.0.0.1"
      canonical.port = 0
    }
  }
  cluster {
roles= ["testrole2"]
    seed-nodes = [
      "akka://ClusterSystem@127.0.0.1:25251",
      "akka://ClusterSystem@127.0.0.1:25252"]
    downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
  }
}

The only difference between the two application.conf files defined above is the value of akka.cluster.roles: either "testrole1" or "testrole2".

How should application.conf be configured such that the Akka cluster is instructed to start actor a1 on node n1 and actor a2 on node n2? Should node details be specified at the level of akka.cluster in application.conf?
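
To make the per-node option above concrete: one way to avoid duplicating the shared settings is to keep the common configuration in application.conf and give each node a small file that only sets its roles. This is just a sketch using the Typesafe Config library's include and config.resource mechanisms; the file name testrole1.conf is illustrative:

# testrole1.conf (per-node file for node n1)
include "application"                # pulls in the shared application.conf
akka.cluster.roles = ["testrole1"]

# launched with, for example:
#   java -Dconfig.resource=testrole1.conf ...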

Update:

Another option is to pass the role name via an environment variable. I've just noticed this is explicitly stated in https://doc.akka.io/docs/akka/current/typed/cluster.html: "The node roles are defined in the configuration property named akka.cluster.roles and typically defined in the start script as a system property or environment variable." In this scenario, the same application.conf file is used for all nodes, but each node sets an environment variable. For example, an updated application.conf (note the addition of ENV_VARIABLE):

akka {
  loglevel = debug
  actor {
    provider = cluster

    serialization-bindings {
      "sample.cluster.CborSerializable" = jackson-cbor
    }
  }
  remote {
    artery {
      canonical.hostname = "127.0.0.1"
      canonical.port = 0
    }
  }
  cluster {
roles= ["ENV_VARIABLE"]
    seed-nodes = [
      "akka://ClusterSystem@127.0.0.1:25251",
      "akka://ClusterSystem@127.0.0.1:25252"]
    downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
  }
}

The cluster startup scripts would then determine the role for each node via the ENV_VARIABLE parameter. Is this a viable solution?
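
For reference, with the optional ${?ENV_VARIABLE} substitution shown above, the element is simply dropped from the list when the variable is unset, and it resolves from the environment (or a system property of the same name) at startup. The start script for each node then only needs to set the variable before launching; a sketch, with ENV_VARIABLE and node.jar as placeholder names:

# start script for node n1
ENV_VARIABLE=testrole1 java -jar node.jar

# start script for node n2
ENV_VARIABLE=testrole2 java -jar node.jar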


Answer

If you're going to assign different roles to different nodes, those nodes cannot use the same configuration. The easiest way to accomplish this is for n1 to have "testrole1" in its akka.cluster.roles list and n2 to have "testrole2" in its akka.cluster.roles list.

Everything in akka.cluster config is only configuring that node for participation in the cluster (it’s configuring the cluster component on that node). A few of the settings have to be the same across the nodes of a cluster (e.g. the SBR settings), but a setting on n1 doesn’t affect a setting on n2.
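
As a concrete sketch of one way to wire this up: the shared application.conf stays identical across nodes, the role is supplied per node (here as a command-line argument), and each node decides locally which actors to start from its own roles. The class name NodeMain and the actor start-up lines are illustrative, not the only possible arrangement:

import akka.actor.ActorSystem;
import akka.cluster.Cluster;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class NodeMain {
  public static void main(String[] args) {
    String role = args[0]; // e.g. "testrole1" on n1, "testrole2" on n2

    // Override only akka.cluster.roles; everything else comes from the shared application.conf.
    Config config = ConfigFactory
        .parseString("akka.cluster.roles = [\"" + role + "\"]")
        .withFallback(ConfigFactory.load());

    ActorSystem system = ActorSystem.create("ClusterSystem", config);

    // Each node starts only the actors matching its own roles.
    if (Cluster.get(system).getSelfRoles().contains("testrole1")) {
      // system.actorOf(A1.props(), "a1");  // placeholder for actor a1
    } else if (Cluster.get(system).getSelfRoles().contains("testrole2")) {
      // system.actorOf(A2.props(), "a2");  // placeholder for actor a2
    }
  }
}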
