net_kernel
Erlang Networking Kernel
The net kernel is a system process, registered as
net_kernel
, which must be running for distributed Erlang
to work. The purpose of this process is to implement parts of
the BIFs spawn/4
and spawn_link/4
, and to provide
monitoring of the network.
An Erlang node is started using the command line flag
-name
or -sname
:
$ erl -sname foobar
It is also possible to call net_kernel:start/1
directly from the normal Erlang shell prompt:
1> net_kernel:start([foobar, shortnames]).
{ok,<0.64.0>}
(foobar@gringotts)2>
If the node is started with the command line flag -sname
,
the node name will be foobar@Host
, where Host
is
the short name of the host (not the fully qualified domain name).
If started with the -name
flag, Host
is the fully
qualified domain name. See erl(1)
.
Normally, connections are established automatically when
another node is referenced. This functionality can be disabled
by setting the Kernel configuration parameter
dist_auto_connect
to false
, see
kernel(6). In this case,
connections must be established explicitly by calling
net_kernel:connect_node/1
.
Which nodes are allowed to communicate with each other is handled by the magic cookie system, see Distributed Erlang in the Erlang Reference Manual.
Functions
allow/1
Limits access to the specified set of nodes. Any access
attempts made from (or to) nodes not in Nodes will be
rejected.
Returns error if any element in Nodes is not an atom.
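For illustration, a hedged shell sketch (the node names foo@gringotts
and bar@gringotts are invented for this example): restrict the local
node so that only the listed nodes are accepted.
(foobar@gringotts)2> net_kernel:allow([foo@gringotts, bar@gringotts]).
ok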
connect_node/1
Establishes a connection to Node. Returns true
if successful, false
if not, and ignored
if
the local node is not alive.
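A short shell sketch (the node name baz@gringotts is invented; the
actual result depends on whether that node is reachable):
(foobar@gringotts)2> net_kernel:connect_node(baz@gringotts).
true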
monitor_nodes/1
monitor_nodes/2
The calling process subscribes or unsubscribes to node
status change messages. A nodeup
message is delivered
to all subscribing processes when a new node is connected, and
a nodedown
message is delivered when a node is
disconnected.
If Flag is true, a new subscription is started.
If Flag is false, all previous subscriptions --
started with the same Options list -- are stopped. Two
option lists are considered the same if they contain the same
set of options.
As of kernel
version 2.11.4, and erts
version
5.5.4, the following is guaranteed:
nodeup messages will be delivered before delivery of any message
from the remote node passed through the newly established connection.
nodedown messages will not be delivered until all messages from the
remote node that have been passed through the connection have been
delivered.
Note that this is not guaranteed for kernel
versions before 2.11.4.
As of kernel
version 2.11.4 subscriptions can also be
made before the net_kernel
server has been started,
i.e., net_kernel:monitor_nodes/[1,2]
does not return
ignored
.
As of kernel
version 2.13, and erts
version
5.7, the following is guaranteed:
nodeup messages will be delivered after the corresponding node
appears in results from erlang:nodes/X.
nodedown messages will be delivered after the corresponding node has
disappeared in results from erlang:nodes/X.
Note that this is not guaranteed for kernel
versions before 2.13.
The format of the node status change messages depends on
Options. If Options is [], which is the default,
the format is:
{nodeup, Node} | {nodedown, Node}
  Node = node()
If Options /= [], the format is:
{nodeup, Node, InfoList} | {nodedown, Node, InfoList}
  Node = node()
  InfoList = [{Tag, Val}]
InfoList
is a list of tuples. Its contents depend on
Options; see below.
Also, when Options == [], only visible nodes, that
only visible nodes, that
is, nodes that appear in the result of
nodes/0, are
monitored.
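As a sketch of the default format (the node name is invented), a
shell session might look like this once another node connects:
1> net_kernel:monitor_nodes(true).
ok
2> flush().
Shell got {nodeup,baz@gringotts}
ok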
Option can be any of the following:
{node_type, NodeType}
  Currently valid values for NodeType are:
  visible
    Subscribe to node status change messages for visible nodes
    only. The tuple {node_type, visible} is included in InfoList.
  hidden
    Subscribe to node status change messages for hidden nodes
    only. The tuple {node_type, hidden} is included in InfoList.
  all
    Subscribe to node status change messages for both visible and
    hidden nodes. The tuple {node_type, visible | hidden} is
    included in InfoList.
nodedown_reason
  The tuple {nodedown_reason, Reason} is included in InfoList in
  nodedown messages. Reason can be:
  connection_setup_failed
    The connection setup failed (after nodeup messages had been
    sent).
  no_network
    No network is available.
  net_kernel_terminated
    The net_kernel process terminated.
  shutdown
    Unspecified connection shutdown.
  connection_closed
    The connection was closed.
  disconnect
    The connection was disconnected (forced from the current node).
  net_tick_timeout
    Net tick time-out.
  send_net_tick_failed
    Failed to send net tick over the connection.
  get_status_failed
    Status information retrieval from the Port holding the
    connection failed.
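The module below is a minimal sketch, not part of the manual, showing
one way to consume these messages: it subscribes with {node_type, all}
and nodedown_reason and prints every event (the module and function
names are invented for the example).
-module(node_watcher).
-export([start/0]).

%% Spawn a process that subscribes to node status change messages
%% for both visible and hidden nodes, including the nodedown reason.
start() ->
    spawn(fun() ->
                  ok = net_kernel:monitor_nodes(true,
                                                [{node_type, all},
                                                 nodedown_reason]),
                  loop()
          end).

%% Print each nodeup/nodedown message as it arrives.
loop() ->
    receive
        {nodeup, Node, InfoList} ->
            io:format("nodeup ~p ~p~n", [Node, InfoList]),
            loop();
        {nodedown, Node, InfoList} ->
            io:format("nodedown ~p ~p~n", [Node, InfoList]),
            loop()
    end.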
get_net_ticktime/0
Gets net_ticktime (see kernel(6)).
Currently defined return values (Res):
NetTicktime
  net_ticktime is NetTicktime seconds.
{ongoing_change_to, NetTicktime}
  net_kernel is currently changing net_ticktime to NetTicktime
  seconds.
ignored
  The local node is not alive.
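For example, on a node still using the default net_ticktime of 60
seconds (the default is set via kernel(6)):
1> net_kernel:get_net_ticktime().
60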
set_net_ticktime/1
set_net_ticktime/2
Sets net_ticktime (see kernel(6)) to NetTicktime seconds.
TransitionPeriod defaults to 60.
Some definitions:
Minimum transition traffic interval (MTTI)
  minimum(TransitionPeriod, NetTicktime)*1000 div 4 milliseconds.
Transition period
  The time of the least number of consecutive MTTIs to cover
  TransitionPeriod seconds following the call to set_net_ticktime/2
  (i.e. ((TransitionPeriod*1000 - 1) div MTTI + 1)*MTTI milliseconds).
If NetTicktime < PreviousNetTicktime
, the actual
net_ticktime
change will be done at the end of
the transition period; otherwise, at the beginning. During
the transition period, net_kernel
will ensure that
there will be outgoing traffic on all connections at least
every MTTI
millisecond.
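As a worked example (values chosen for illustration, not taken from
the manual): with NetTicktime = 30 and TransitionPeriod = 60, the
shell arithmetic below gives an MTTI of 7500 milliseconds and a
transition period of 60000 milliseconds.
1> NetTicktime = 30, TransitionPeriod = 60, MTTI = min(TransitionPeriod, NetTicktime)*1000 div 4.
7500
2> ((TransitionPeriod*1000 - 1) div MTTI + 1)*MTTI.
60000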
Note!
The net_ticktime
changes have to be initiated on all
nodes in the network (with the same NetTicktime)
before the end of any transition period on any node;
otherwise, connections may erroneously be disconnected.
Returns one of the following:
unchanged
  net_ticktime already had the value of NetTicktime and was left
  unchanged.
change_initiated
  net_kernel has initiated the change of net_ticktime to
  NetTicktime seconds.
{ongoing_change_to, NewNetTicktime}
  The request was ignored because net_kernel was busy changing
  net_ticktime to NewNetTicktime seconds.
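A hedged shell sketch (assuming the current net_ticktime differs from
the requested value and no change is already in progress):
(foobar@gringotts)1> net_kernel:set_net_ticktime(30, 60).
change_initiated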
start([Name]) -> {ok, pid()} | {error, Reason}
start([Name, NameType]) -> {ok, pid()} | {error, Reason}
start([Name, NameType, Ticktime]) -> {ok, pid()} | {error, Reason}
Name = atom()
NameType = shortnames | longnames
Reason = {already_started, pid()} | term()
Note that the argument is a list with exactly one, two, or
three elements. NameType
defaults to longnames
and Ticktime
to 15000.
Turns a non-distributed node into a distributed node by
starting net_kernel
and other necessary processes.
stop/0
Turns a distributed node into a non-distributed node. For
other nodes in the network, this is the same as the node
going down. Only possible when the net kernel was started
using start/1
, otherwise returns
{error, not_allowed}
. Returns {error, not_found}
if the local node is not alive.
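A sketch of the full round trip on a node started without -name or
-sname (the host name follows the earlier example):
$ erl
1> net_kernel:start([foobar, shortnames]).
{ok,<0.64.0>}
(foobar@gringotts)2> net_kernel:stop().
ok
3> node().
nonode@nohost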