Description
I've been using eRPC in my project, and it has been working very well.
My workflow is to develop everything locally and then run it on Azure. Locally, I use the `rdma link add type rxe` command on Ubuntu to create a Soft-RoCE device, and then run eRPC with the InfiniBand transport and RoCE enabled. While this approach works, I find the Soft-RoCE setup somewhat inconvenient.
I was wondering whether it would be feasible to implement a simple socket-based transport under transport_impl, since plain sockets are supported by virtually every server, new or old. That way:
- I wouldn't need to depend on anything else (e.g., rdma-core);
- I wouldn't need to change any code in my project.
Also, this would be purely for development purposes, so performance is not a concern.
If this sounds reasonable, I’d greatly appreciate any insights on how to approach implementing this transport in eRPC. I’m happy to explore adding it myself.
The exposed interfaces would look like the following (copied from fake_transport.h):
```cpp
{
  inline void init_hugepage_structures(HugeAlloc *, uint8_t **) {}  // set up structures that need hugepage memory
  inline void fill_local_routing_info(routing_info_t *) const {}    // fill in this endpoint's address
  inline bool resolve_remote_routing_info(routing_info_t *) { return false; }  // resolve a remote endpoint's address
  inline size_t get_bandwidth() const { return 0; }                 // link bandwidth in bytes per second
  static std::string routing_info_str(routing_info_t *) { return ""; }  // human-readable routing info
  void tx_burst(const tx_burst_item_t *, size_t) {}                 // transmit a batch of packets
  void tx_flush() {}                                                // complete any pending transmissions
  size_t rx_burst() { return 0; }                                   // poll for received packets, return the count
  void post_recvs(size_t) {}                                        // replenish the receive queue
}
```
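
For concreteness, here is a minimal sketch of what such a socket-based transport could look like, using a plain UDP socket. Everything in it is hypothetical: the `UdpSocketTransport` name, the simplified `routing_info_t`/`tx_burst_item_t` stand-ins, and the `kMTU`/`kRxBatch` constants are mine for illustration only. A real implementation would derive from eRPC's Transport class, reuse its actual routing-info and buffer types, and hook into the hugepage allocator rather than the fixed arrays used here.

```cpp
// Hypothetical sketch only: the types and constants below are simplified
// stand-ins, not eRPC's real Transport API. Error handling is omitted.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#include <cstdint>

// Stand-in routing info: just an IPv4 address and UDP port (network byte order).
struct routing_info_t {
  uint32_t ipv4_addr;
  uint16_t udp_port;
};

// Stand-in tx item: one packet and its destination.
struct tx_burst_item_t {
  const routing_info_t *routing_info;
  const void *buf;
  size_t data_bytes;
};

class UdpSocketTransport {
 public:
  static constexpr size_t kMTU = 1024;    // assumed maximum packet size
  static constexpr size_t kRxBatch = 16;  // packets polled per rx_burst()

  explicit UdpSocketTransport(uint16_t local_port) {
    sock_fd_ = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(local_port);
    bind(sock_fd_, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));
  }

  ~UdpSocketTransport() { close(sock_fd_); }

  // Transmit a batch of packets, one sendto() per item.
  void tx_burst(const tx_burst_item_t *items, size_t num_items) {
    for (size_t i = 0; i < num_items; i++) {
      sockaddr_in dst{};
      dst.sin_family = AF_INET;
      dst.sin_addr.s_addr = items[i].routing_info->ipv4_addr;
      dst.sin_port = items[i].routing_info->udp_port;
      sendto(sock_fd_, items[i].buf, items[i].data_bytes, 0,
             reinterpret_cast<sockaddr *>(&dst), sizeof(dst));
    }
  }

  // Nothing is queued in userspace, so there is nothing to flush.
  void tx_flush() {}

  // Poll non-blockingly for up to kRxBatch packets; return how many arrived.
  size_t rx_burst() {
    size_t n = 0;
    while (n < kRxBatch) {
      ssize_t ret = recv(sock_fd_, rx_bufs_[n], kMTU, MSG_DONTWAIT);
      if (ret <= 0) break;  // EAGAIN: the socket's queue is empty
      n++;
    }
    return n;
  }

  // The kernel manages the receive queue, so posting recvs is a no-op.
  void post_recvs(size_t) {}

 private:
  int sock_fd_ = -1;
  uint8_t rx_bufs_[kRxBatch][kMTU];  // fixed receive buffers for the sketch
};
```

A connectionless UDP socket seems to map onto the packet-oriented tx_burst/rx_burst interface more naturally than a TCP stream would, and it keeps tx_flush and post_recvs as no-ops, since the kernel owns both the send and receive queues.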